00:00:00.000 Started by upstream project "autotest-per-patch" build number 132803
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.097 The recommended git tool is: git
00:00:00.097 using credential 00000000-0000-0000-0000-000000000002
00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.141 Fetching changes from the remote Git repository
00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.188 Using shallow fetch with depth 1
00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.188 > git --version # timeout=10
00:00:00.212 > git --version # 'git version 2.39.2'
00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.319 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.331 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.342 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.342 > git config core.sparsecheckout # timeout=10
00:00:07.353 > git read-tree -mu HEAD # timeout=10
00:00:07.370 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.394 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.394 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
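The block above pins the jbp helper repo to one revision via a depth-1 fetch of refs/heads/master followed by a detached checkout of FETCH_HEAD. A hedged, standalone sketch of what the Jenkins git plugin is doing here (URL and SHA taken from the log; /tmp/jbp is an illustrative target directory):

# Sketch only: reproduce the shallow, pinned checkout by hand.
git init /tmp/jbp && cd /tmp/jbp
git fetch --tags --force --progress --depth=1 -- \
  https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # detached HEAD at the pinned commit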
00:00:07.524 [Pipeline] Start of Pipeline
00:00:07.536 [Pipeline] library
00:00:07.537 Loading library shm_lib@master
00:00:07.538 Library shm_lib@master is cached. Copying from home.
00:00:07.555 [Pipeline] node
00:00:07.568 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.569 [Pipeline] {
00:00:07.579 [Pipeline] catchError
00:00:07.580 [Pipeline] {
00:00:07.593 [Pipeline] wrap
00:00:07.602 [Pipeline] {
00:00:07.612 [Pipeline] stage
00:00:07.614 [Pipeline] { (Prologue)
00:00:07.633 [Pipeline] echo
00:00:07.635 Node: VM-host-SM38
00:00:07.642 [Pipeline] cleanWs
00:00:07.653 [WS-CLEANUP] Deleting project workspace...
00:00:07.653 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.660 [WS-CLEANUP] done
00:00:07.899 [Pipeline] setCustomBuildProperty
00:00:08.011 [Pipeline] httpRequest
00:00:08.563 [Pipeline] echo
00:00:08.565 Sorcerer 10.211.164.112 is alive
00:00:08.574 [Pipeline] retry
00:00:08.576 [Pipeline] {
00:00:08.588 [Pipeline] httpRequest
00:00:08.593 HttpMethod: GET
00:00:08.594 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.594 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.611 Response Code: HTTP/1.1 200 OK
00:00:08.611 Success: Status code 200 is in the accepted range: 200,404
00:00:08.612 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.153 [Pipeline] }
00:00:13.169 [Pipeline] // retry
00:00:13.177 [Pipeline] sh
00:00:13.479 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.496 [Pipeline] httpRequest
00:00:14.734 [Pipeline] echo
00:00:14.736 Sorcerer 10.211.164.112 is alive
00:00:14.745 [Pipeline] retry
00:00:14.747 [Pipeline] {
00:00:14.760 [Pipeline] httpRequest
00:00:14.765 HttpMethod: GET
00:00:14.766 URL: http://10.211.164.112/packages/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz
00:00:14.766 Sending request to url: http://10.211.164.112/packages/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz
00:00:14.792 Response Code: HTTP/1.1 200 OK
00:00:14.792 Success: Status code 200 is in the accepted range: 200,404
00:00:14.793 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz
00:01:55.341 [Pipeline] }
00:01:55.359 [Pipeline] // retry
00:01:55.367 [Pipeline] sh
00:01:55.655 + tar --no-same-owner -xf spdk_805149865615b53d95323f8dff48d360219afb4e.tar.gz
00:01:58.305 [Pipeline] sh
00:01:58.589 + git -C spdk log --oneline -n5
00:01:58.589 805149865 build: use VERSION file for storing version
00:01:58.589 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:58.589 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:58.590 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:58.590 e2dfdf06c accel/mlx5: Register post_poller handler
00:01:58.608 [Pipeline] writeFile
00:01:58.621 [Pipeline] sh
00:01:58.908 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:58.921 [Pipeline] sh
00:01:59.207 + cat autorun-spdk.conf
00:01:59.207 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.207 SPDK_TEST_NVME=1
00:01:59.207 SPDK_TEST_FTL=1
00:01:59.207 SPDK_TEST_ISAL=1
00:01:59.207 SPDK_RUN_ASAN=1
00:01:59.207 SPDK_RUN_UBSAN=1
00:01:59.207 SPDK_TEST_XNVME=1
00:01:59.207 SPDK_TEST_NVME_FDP=1
00:01:59.207 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.216 RUN_NIGHTLY=0
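autorun-spdk.conf, dumped above, is plain shell: KEY=VALUE flags that the autotest scripts source, with each SPDK_TEST_*/SPDK_RUN_* value gating a build or test stage (the flag checks are visible as xtrace lines later in this log). A hedged sketch of consuming it the same way:

# Sketch only: read one flag the way the autotest scripts do after sourcing the conf.
source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
if [[ ${SPDK_TEST_NVME_FDP:-0} -eq 1 ]]; then
  echo "FDP tests requested"   # matches SPDK_TEST_NVME_FDP=1 above
fi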
00:01:59.217 [Pipeline] }
00:01:59.230 [Pipeline] // stage
00:01:59.243 [Pipeline] stage
00:01:59.245 [Pipeline] { (Run VM)
00:01:59.256 [Pipeline] sh
00:01:59.539 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:59.539 + echo 'Start stage prepare_nvme.sh'
00:01:59.539 Start stage prepare_nvme.sh
00:01:59.539 + [[ -n 10 ]]
00:01:59.539 + disk_prefix=ex10
00:01:59.539 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:59.539 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:59.539 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:59.539 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.539 ++ SPDK_TEST_NVME=1
00:01:59.539 ++ SPDK_TEST_FTL=1
00:01:59.539 ++ SPDK_TEST_ISAL=1
00:01:59.539 ++ SPDK_RUN_ASAN=1
00:01:59.539 ++ SPDK_RUN_UBSAN=1
00:01:59.539 ++ SPDK_TEST_XNVME=1
00:01:59.539 ++ SPDK_TEST_NVME_FDP=1
00:01:59.539 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.539 ++ RUN_NIGHTLY=0
00:01:59.539 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:59.539 + nvme_files=()
00:01:59.539 + declare -A nvme_files
00:01:59.539 + backend_dir=/var/lib/libvirt/images/backends
00:01:59.539 + nvme_files['nvme.img']=5G
00:01:59.540 + nvme_files['nvme-cmb.img']=5G
00:01:59.540 + nvme_files['nvme-multi0.img']=4G
00:01:59.540 + nvme_files['nvme-multi1.img']=4G
00:01:59.540 + nvme_files['nvme-multi2.img']=4G
00:01:59.540 + nvme_files['nvme-openstack.img']=8G
00:01:59.540 + nvme_files['nvme-zns.img']=5G
00:01:59.540 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:59.540 + (( SPDK_TEST_FTL == 1 ))
00:01:59.540 + nvme_files["nvme-ftl.img"]=6G
00:01:59.540 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:59.540 + nvme_files["nvme-fdp.img"]=1G
00:01:59.540 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:59.540 + for nvme in "${!nvme_files[@]}"
00:01:59.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:01:59.540 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:59.540 + for nvme in "${!nvme_files[@]}"
00:01:59.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:02:00.110 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:00.110 + for nvme in "${!nvme_files[@]}"
00:02:00.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:02:00.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.111 + for nvme in "${!nvme_files[@]}"
00:02:00.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:02:00.111 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:00.111 + for nvme in "${!nvme_files[@]}"
00:02:00.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:02:00.372 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.372 + for nvme in "${!nvme_files[@]}"
00:02:00.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:02:00.372 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.372 + for nvme in "${!nvme_files[@]}"
00:02:00.372 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:02:00.633 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.633 + for nvme in "${!nvme_files[@]}"
00:02:00.633 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:02:00.633 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:00.633 + for nvme in "${!nvme_files[@]}"
00:02:00.633 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:02:01.207 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.207 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:02:01.207 + echo 'End stage prepare_nvme.sh'
00:02:01.207 End stage prepare_nvme.sh
00:02:01.220 [Pipeline] sh
00:02:01.505 + DISTRO=fedora39
00:02:01.505 + CPUS=10
00:02:01.505 + RAM=12288
00:02:01.505 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:01.505 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:01.505 
00:02:01.505 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:01.505 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:01.505 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:01.505 HELP=0
00:02:01.505 DRY_RUN=0
00:02:01.505 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:02:01.505 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:01.505 NVME_AUTO_CREATE=0
00:02:01.505 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:02:01.505 NVME_CMB=,,,,
00:02:01.505 NVME_PMR=,,,,
00:02:01.505 NVME_ZNS=,,,,
00:02:01.505 NVME_MS=true,,,,
00:02:01.505 NVME_FDP=,,,on,
00:02:01.505 SPDK_VAGRANT_DISTRO=fedora39
00:02:01.505 SPDK_VAGRANT_VMCPU=10
00:02:01.505 SPDK_VAGRANT_VMRAM=12288
00:02:01.505 SPDK_VAGRANT_PROVIDER=libvirt
00:02:01.505 SPDK_VAGRANT_HTTP_PROXY=
00:02:01.505 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:01.505 SPDK_OPENSTACK_NETWORK=0
00:02:01.505 VAGRANT_PACKAGE_BOX=0
00:02:01.505 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:01.505 FORCE_DISTRO=true
00:02:01.505 VAGRANT_BOX_VERSION=
00:02:01.505 EXTRA_VAGRANTFILES=
00:02:01.505 NIC_MODEL=e1000
00:02:01.505 
00:02:01.505 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:01.505 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:04.057 Bringing machine 'default' up with 'libvirt' provider...
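Each "Formatting ..., fmt=raw size=... preallocation=falloc" line above is create_nvme_img.sh producing a raw, fallocate-preallocated backing file for one emulated NVMe disk. A hedged equivalent with stock qemu-img (path and size copied from one loop iteration; SPDK's wrapper may do more):

# Sketch only: create one backing file like the loop above does.
sudo qemu-img create -f raw -o preallocation=falloc \
  /var/lib/libvirt/images/backends/ex10-nvme-multi2.img 4G
qemu-img info /var/lib/libvirt/images/backends/ex10-nvme-multi2.img   # expect raw format, 4 GiB virtual size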
00:02:04.629 ==> default: Creating image (snapshot of base box volume).
00:02:04.629 ==> default: Creating domain with the following settings...
00:02:04.629 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733754882_beff8def35d2e25cfc2c
00:02:04.629 ==> default: -- Domain type: kvm
00:02:04.629 ==> default: -- Cpus: 10
00:02:04.629 ==> default: -- Feature: acpi
00:02:04.629 ==> default: -- Feature: apic
00:02:04.629 ==> default: -- Feature: pae
00:02:04.630 ==> default: -- Memory: 12288M
00:02:04.630 ==> default: -- Memory Backing: hugepages:
00:02:04.630 ==> default: -- Management MAC:
00:02:04.630 ==> default: -- Loader:
00:02:04.630 ==> default: -- Nvram:
00:02:04.630 ==> default: -- Base box: spdk/fedora39
00:02:04.630 ==> default: -- Storage pool: default
00:02:04.630 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733754882_beff8def35d2e25cfc2c.img (20G)
00:02:04.630 ==> default: -- Volume Cache: default
00:02:04.630 ==> default: -- Kernel:
00:02:04.630 ==> default: -- Initrd:
00:02:04.630 ==> default: -- Graphics Type: vnc
00:02:04.630 ==> default: -- Graphics Port: -1
00:02:04.630 ==> default: -- Graphics IP: 127.0.0.1
00:02:04.630 ==> default: -- Graphics Password: Not defined
00:02:04.630 ==> default: -- Video Type: cirrus
00:02:04.630 ==> default: -- Video VRAM: 9216
00:02:04.630 ==> default: -- Sound Type:
00:02:04.630 ==> default: -- Keymap: en-us
00:02:04.630 ==> default: -- TPM Path:
00:02:04.630 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:04.630 ==> default: -- Command line args:
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:04.630 ==> default: -> value=-drive,
00:02:04.630 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:04.630 ==> default: -> value=-device,
00:02:04.630 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
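The args above attach four emulated NVMe controllers; the last one (serial 12343) hangs off an nvme-subsys device with Flexible Data Placement enabled. Reassembled as a standalone invocation, the FDP-capable controller corresponds to QEMU options like the following; the machine, memory, and display flags are illustrative, not the libvirt-generated command line:

# Sketch only: boot QEMU with just the FDP-enabled NVMe subsystem from above.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -machine q35,accel=kvm -m 2048 -display none \
  -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
  -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0 \
  -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096

fdp.runs, fdp.nrg, and fdp.nruh set the reclaim unit nominal size, the number of reclaim groups, and the number of reclaim unit handles advertised by the emulated endurance group.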
00:02:04.890 ==> default: Creating shared folders metadata...
00:02:04.890 ==> default: Starting domain.
00:02:07.436 ==> default: Waiting for domain to get an IP address...
00:02:29.494 ==> default: Waiting for SSH to become available...
00:02:29.494 ==> default: Configuring and enabling network interfaces...
00:02:33.689 default: SSH address: 192.168.121.241:22
00:02:33.689 default: SSH username: vagrant
00:02:33.689 default: SSH auth method: private key
00:02:35.601 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:45.600 ==> default: Mounting SSHFS shared folder...
00:02:46.580 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:46.580 ==> default: Checking Mount..
00:02:47.965 ==> default: Folder Successfully Mounted!
00:02:47.965 
00:02:47.965 SUCCESS!
00:02:47.965 
00:02:47.965 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:47.965 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:47.965 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:47.965 
00:02:47.975 [Pipeline] }
00:02:47.989 [Pipeline] // stage
00:02:47.997 [Pipeline] dir
00:02:47.998 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:47.999 [Pipeline] {
00:02:48.010 [Pipeline] catchError
00:02:48.012 [Pipeline] {
00:02:48.021 [Pipeline] sh
00:02:48.301 + vagrant ssh-config --host vagrant
00:02:48.302 + tee ssh_conf
00:02:48.302 + sed -ne '/^Host/,$p'
00:02:51.603 Host vagrant
00:02:51.603 HostName 192.168.121.241
00:02:51.603 User vagrant
00:02:51.603 Port 22
00:02:51.603 UserKnownHostsFile /dev/null
00:02:51.603 StrictHostKeyChecking no
00:02:51.603 PasswordAuthentication no
00:02:51.603 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:51.603 IdentitiesOnly yes
00:02:51.603 LogLevel FATAL
00:02:51.603 ForwardAgent yes
00:02:51.603 ForwardX11 yes
00:02:51.603 
00:02:51.617 [Pipeline] withEnv
00:02:51.619 [Pipeline] {
00:02:51.633 [Pipeline] sh
00:02:51.916 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:51.917 source /etc/os-release
00:02:51.917 [[ -e /image.version ]] && img=$(< /image.version)
00:02:51.917 # Minimal, systemd-like check.
00:02:51.917 if [[ -e /.dockerenv ]]; then
00:02:51.917 # Clear garbage from the node'\''s name:
00:02:51.917 # agt-er_autotest_547-896 -> autotest_547-896
00:02:51.917 # $HOSTNAME is the actual container id
00:02:51.917 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:51.917 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:51.917 # We can assume this is a mount from a host where container is running,
00:02:51.917 # so fetch its hostname to easily identify the target swarm worker.
00:02:51.917 container="$(< /etc/hostname) ($agent)"
00:02:51.917 else
00:02:51.917 # Fallback
00:02:51.917 container=$agent
00:02:51.917 fi
00:02:51.917 fi
00:02:51.917 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:51.917 '
00:02:52.191 [Pipeline] }
00:02:52.206 [Pipeline] // withEnv
00:02:52.213 [Pipeline] setCustomBuildProperty
00:02:52.226 [Pipeline] stage
00:02:52.227 [Pipeline] { (Tests)
00:02:52.242 [Pipeline] sh
00:02:52.528 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:52.805 [Pipeline] sh
00:02:53.097 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:53.375 [Pipeline] timeout
00:02:53.376 Timeout set to expire in 50 min
00:02:53.377 [Pipeline] {
00:02:53.391 [Pipeline] sh
00:02:53.676 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:54.244 HEAD is now at 805149865 build: use VERSION file for storing version
00:02:54.256 [Pipeline] sh
00:02:54.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:54.807 [Pipeline] sh
00:02:55.089 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:55.361 [Pipeline] sh
00:02:55.645 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:55.906 ++ readlink -f spdk_repo
00:02:55.906 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:55.906 + [[ -n /home/vagrant/spdk_repo ]]
00:02:55.906 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:55.906 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:55.906 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:55.906 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:55.906 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:55.906 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:55.906 + cd /home/vagrant/spdk_repo
00:02:55.906 + source /etc/os-release
00:02:55.906 ++ NAME='Fedora Linux'
00:02:55.906 ++ VERSION='39 (Cloud Edition)'
00:02:55.906 ++ ID=fedora
00:02:55.906 ++ VERSION_ID=39
00:02:55.906 ++ VERSION_CODENAME=
00:02:55.906 ++ PLATFORM_ID=platform:f39
00:02:55.906 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:55.906 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:55.906 ++ LOGO=fedora-logo-icon
00:02:55.906 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:55.906 ++ HOME_URL=https://fedoraproject.org/
00:02:55.906 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:55.906 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:55.906 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:55.906 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:55.906 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:55.906 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:55.906 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:55.906 ++ SUPPORT_END=2024-11-12
00:02:55.906 ++ VARIANT='Cloud Edition'
00:02:55.906 ++ VARIANT_ID=cloud
00:02:55.906 + uname -a
00:02:55.906 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:55.906 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:56.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:56.740 Hugepages
00:02:56.740 node hugesize free / total
00:02:56.740 node0 1048576kB 0 / 0
00:02:56.740 node0 2048kB 0 / 0
00:02:56.740 
00:02:56.740 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:56.740 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:56.740 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:56.740 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:56.740 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:56.740 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:56.740 + rm -f /tmp/spdk-ld-path
00:02:56.740 + source autorun-spdk.conf
00:02:56.740 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.740 ++ SPDK_TEST_NVME=1
00:02:56.740 ++ SPDK_TEST_FTL=1
00:02:56.740 ++ SPDK_TEST_ISAL=1
00:02:56.740 ++ SPDK_RUN_ASAN=1
00:02:56.740 ++ SPDK_RUN_UBSAN=1
00:02:56.740 ++ SPDK_TEST_XNVME=1
00:02:56.740 ++ SPDK_TEST_NVME_FDP=1
00:02:56.740 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:56.740 ++ RUN_NIGHTLY=0
00:02:56.740 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:56.740 + [[ -n '' ]]
00:02:56.740 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:56.740 + for M in /var/spdk/build-*-manifest.txt
00:02:56.740 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:56.740 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.740 + for M in /var/spdk/build-*-manifest.txt
00:02:56.740 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:56.740 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.740 + for M in /var/spdk/build-*-manifest.txt
00:02:56.740 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:56.740 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.740 ++ uname
00:02:56.740 + [[ Linux == \L\i\n\u\x ]]
00:02:56.740 + sudo dmesg -T
00:02:56.740 + sudo dmesg --clear
00:02:56.740 + dmesg_pid=5022
00:02:56.740 + [[ Fedora Linux == FreeBSD ]]
00:02:56.740 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.740 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.740 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:56.740 + [[ -x /usr/src/fio-static/fio ]]
00:02:56.740 + sudo dmesg -Tw
00:02:56.740 + export FIO_BIN=/usr/src/fio-static/fio
00:02:56.740 + FIO_BIN=/usr/src/fio-static/fio
00:02:56.740 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:56.740 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:56.740 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:56.740 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.740 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.740 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:56.740 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.741 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.741 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.002 14:35:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:57.002 14:35:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.002 14:35:34 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:57.002 14:35:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:57.002 14:35:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.002 14:35:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:57.002 14:35:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:57.002 14:35:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:57.002 14:35:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:57.002 14:35:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:57.002 14:35:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:57.002 14:35:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.003 14:35:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.003 14:35:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.003 14:35:34 -- paths/export.sh@5 -- $ export PATH
00:02:57.003 14:35:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.003 14:35:34 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:57.003 14:35:34 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:57.003 14:35:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733754934.XXXXXX
00:02:57.003 14:35:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733754934.P0vBjd
00:02:57.003 14:35:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:57.003 14:35:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:57.003 14:35:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:57.003 14:35:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:57.003 14:35:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:57.003 14:35:34 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:57.003 14:35:34 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:57.003 14:35:34 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.003 Traceback (most recent call last):
00:02:57.003 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:02:57.003 import spdk.rpc as rpc # noqa
00:02:57.003 ^^^^^^^^^^^^^^^^^^^^^^
00:02:57.003 File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:02:57.003 from .version import __version__
00:02:57.003 ModuleNotFoundError: No module named 'spdk.version'
00:02:57.003 14:35:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:57.003 14:35:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:57.003 14:35:35 -- pm/common@17 -- $ local monitor
00:02:57.003 14:35:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.003 14:35:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.003 14:35:35 -- pm/common@25 -- $ sleep 1
00:02:57.003 14:35:35 -- pm/common@21 -- $ date +%s
00:02:57.003 14:35:35 -- pm/common@21 -- $ date +%s
00:02:57.003 14:35:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733754935
00:02:57.003 14:35:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733754935
00:02:57.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733754935_collect-vmstat.pm.log
00:02:57.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733754935_collect-cpu-load.pm.log
00:02:57.003 Traceback (most recent call last):
00:02:57.003 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:02:57.003 import spdk.rpc as rpc # noqa
00:02:57.003 ^^^^^^^^^^^^^^^^^^^^^^
00:02:57.003 File "/home/vagrant/spdk_repo/spdk/python/spdk/__init__.py", line 5, in <module>
00:02:57.003 from .version import __version__
00:02:57.003 ModuleNotFoundError: No module named 'spdk.version'
00:02:57.944 14:35:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:57.944 14:35:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:57.944 14:35:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:57.944 14:35:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:57.944 14:35:36 -- spdk/autobuild.sh@16 -- $ date -u
00:02:57.944 Mon Dec 9 02:35:36 PM UTC 2024
00:02:57.944 14:35:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:57.945 v25.01-pre-304-g805149865
00:02:57.945 14:35:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:57.945 14:35:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:57.945 14:35:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:57.945 14:35:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:57.945 14:35:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.945 ************************************
00:02:57.945 START TEST asan
00:02:57.945 ************************************
00:02:57.945 using asan
00:02:57.945 14:35:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:57.945 
00:02:57.945 real 0m0.000s
00:02:57.945 user 0m0.000s
00:02:57.945 sys 0m0.000s
00:02:57.945 14:35:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:57.945 ************************************
00:02:57.945 END TEST asan
00:02:57.945 ************************************
00:02:57.945 14:35:36 asan -- common/autotest_common.sh@10 -- $ set +x
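run_test, used above for the asan no-op check (and next for ubsan), brackets a command with START/END banners and a time measurement. A hedged, simplified re-creation; SPDK's real helper (the autotest_common.sh seen in the trace prefixes) does more bookkeeping:

# Sketch only: a minimal run_test-style wrapper, not SPDK's implementation.
run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"; local rc=$?                 # run the wrapped command; time prints real/user/sys
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}
run_test asan echo 'using asan'          # prints the banners plus the timing block, as above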
00:02:58.205 14:35:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:58.205 14:35:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:58.205 14:35:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:58.205 14:35:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.205 14:35:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.205 ************************************
00:02:58.205 START TEST ubsan
00:02:58.205 ************************************
00:02:58.206 using ubsan
00:02:58.206 14:35:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:58.206 
00:02:58.206 real 0m0.000s
00:02:58.206 user 0m0.000s
00:02:58.206 sys 0m0.000s
00:02:58.206 ************************************
00:02:58.206 END TEST ubsan
00:02:58.206 ************************************
00:02:58.206 14:35:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:58.206 14:35:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:58.206 14:35:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:58.206 14:35:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:58.206 14:35:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:58.206 14:35:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:58.206 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:58.206 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:58.776 Using 'verbs' RDMA provider
00:03:11.946 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:21.912 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:21.912 Creating mk/config.mk...done.
00:03:21.912 Creating mk/cc.flags.mk...done.
00:03:21.912 Type 'make' to build.
00:03:21.912 14:35:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:21.912 14:35:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:21.912 14:35:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:21.912 14:35:59 -- common/autotest_common.sh@10 -- $ set +x
00:03:21.912 ************************************
00:03:21.912 START TEST make
00:03:21.912 ************************************
00:03:21.912 14:35:59 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:21.912 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:21.912 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:21.912 meson setup builddir \
00:03:21.912 -Dwith-libaio=enabled \
00:03:21.912 -Dwith-liburing=enabled \
00:03:21.912 -Dwith-libvfn=disabled \
00:03:21.912 -Dwith-spdk=disabled \
00:03:21.912 -Dexamples=false \
00:03:21.912 -Dtests=false \
00:03:21.912 -Dtools=false && \
00:03:21.912 meson compile -C builddir && \
00:03:21.912 cd -)
00:03:23.810 The Meson build system
00:03:23.810 Version: 1.5.0
00:03:23.810 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:23.810 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:23.810 Build type: native build
00:03:23.810 Project name: xnvme
00:03:23.810 Project version: 0.7.5
00:03:23.810 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:23.810 C linker for the host machine: cc ld.bfd 2.40-14
00:03:23.810 Host machine cpu family: x86_64
00:03:23.810 Host machine cpu: x86_64
00:03:23.810 Message: host_machine.system: linux
00:03:23.810 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:23.810 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:23.810 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:23.810 Run-time dependency threads found: YES
00:03:23.810 Has header "setupapi.h" : NO
00:03:23.810 Has header "linux/blkzoned.h" : YES
00:03:23.810 Has header "linux/blkzoned.h" : YES (cached)
00:03:23.810 Has header "libaio.h" : YES
00:03:23.810 Library aio found: YES
00:03:23.810 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:23.810 Run-time dependency liburing found: YES 2.2
00:03:23.810 Dependency libvfn skipped: feature with-libvfn disabled
00:03:23.810 Found CMake: /usr/bin/cmake (3.27.7)
00:03:23.810 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:23.810 Subproject spdk : skipped: feature with-spdk disabled
00:03:23.810 Run-time dependency appleframeworks found: NO (tried framework)
00:03:23.810 Run-time dependency appleframeworks found: NO (tried framework)
00:03:23.810 Library rt found: YES
00:03:23.810 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:23.810 Configuring xnvme_config.h using configuration
00:03:23.810 Configuring xnvme.spec using configuration
00:03:23.810 Run-time dependency bash-completion found: YES 2.11
00:03:23.810 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:23.810 Program cp found: YES (/usr/bin/cp)
00:03:23.810 Build targets in project: 3
00:03:23.810 
00:03:23.810 xnvme 0.7.5
00:03:23.810 
00:03:23.810 Subprojects
00:03:23.810 spdk : NO Feature 'with-spdk' disabled
00:03:23.810 
00:03:23.810 User defined options
00:03:23.810 examples : false
00:03:23.810 tests : false
00:03:23.810 tools : false
00:03:23.810 with-libaio : enabled
00:03:23.810 with-liburing: enabled
00:03:23.810 with-libvfn : disabled
00:03:23.810 with-spdk : disabled
00:03:23.810 
00:03:24.067 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
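The with-* values echoed under "User defined options" above are meson feature options declared by xnvme. A hedged sketch of inspecting and flipping one against the same builddir (paths from the log; flipping liburing here is purely illustrative):

# Sketch only: list and reconfigure the feature options set above.
meson configure /home/vagrant/spdk_repo/spdk/xnvme/builddir | grep -- with-
meson setup --reconfigure /home/vagrant/spdk_repo/spdk/xnvme/builddir -Dwith-liburing=disabled
meson compile -C /home/vagrant/spdk_repo/spdk/xnvme/builddir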
00:03:24.067 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:24.067 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:24.067 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:24.067 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:24.067 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:24.067 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:24.067 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:24.067 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:24.067 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:24.067 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:24.067 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:24.067 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:24.067 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:24.067 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:24.067 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:24.067 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:24.067 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:24.356 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:24.356 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:24.356 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:24.356 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:24.356 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:24.356 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:24.356 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:24.356 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:24.356 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:24.356 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:24.356 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:24.356 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:24.356 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:24.356 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:24.356 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:24.356 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:24.356 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:24.356 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:24.356 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:24.356 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:24.356 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:24.356 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:24.356 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:24.356 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:24.356 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:24.356 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:24.356 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:24.356 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:24.356 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:24.356 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:24.356 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:24.356 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:24.356 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:24.356 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:24.356 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:24.356 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:24.356 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:24.356 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:24.628 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:24.628 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:24.628 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:24.628 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:24.628 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:24.628 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:24.628 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:24.628 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:24.628 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:24.628 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:24.628 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:24.628 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:24.628 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:24.628 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:24.628 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:24.628 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:24.628 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:24.628 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:24.882 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:25.144 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:25.144 [75/76] Linking static target lib/libxnvme.a
00:03:25.144 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:25.144 INFO: autodetecting backend as ninja
00:03:25.144 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:25.144 /home/vagrant/spdk_repo/spdk/xnvmebuild
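The "Run-time dependency ... found" probes in the meson output (liburing 2.2 for xnvme above, openssl and libpcap for DPDK below) are pkg-config lookups under the hood. A hedged sketch of the same check done by hand:

# Sketch only: the manual equivalent of meson's dependency() probe.
pkg-config --modversion liburing        # meson reported: Run-time dependency liburing found: YES 2.2
pkg-config --cflags --libs liburing     # the flags meson folds into the build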
00:03:31.705 The Meson build system
00:03:31.705 Version: 1.5.0
00:03:31.705 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:31.705 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:31.705 Build type: native build
00:03:31.705 Program cat found: YES (/usr/bin/cat)
00:03:31.705 Project name: DPDK
00:03:31.705 Project version: 24.03.0
00:03:31.705 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:31.705 C linker for the host machine: cc ld.bfd 2.40-14
00:03:31.705 Host machine cpu family: x86_64
00:03:31.705 Host machine cpu: x86_64
00:03:31.705 Message: ## Building in Developer Mode ##
00:03:31.705 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:31.705 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:31.705 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:31.705 Program python3 found: YES (/usr/bin/python3)
00:03:31.705 Program cat found: YES (/usr/bin/cat)
00:03:31.705 Compiler for C supports arguments -march=native: YES
00:03:31.705 Checking for size of "void *" : 8
00:03:31.705 Checking for size of "void *" : 8 (cached)
00:03:31.705 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:31.705 Library m found: YES
00:03:31.705 Library numa found: YES
00:03:31.705 Has header "numaif.h" : YES
00:03:31.705 Library fdt found: NO
00:03:31.705 Library execinfo found: NO
00:03:31.705 Has header "execinfo.h" : YES
00:03:31.705 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:31.705 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:31.705 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:31.705 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:31.705 Run-time dependency openssl found: YES 3.1.1
00:03:31.705 Run-time dependency libpcap found: YES 1.10.4
00:03:31.705 Has header "pcap.h" with dependency libpcap: YES
00:03:31.705 Compiler for C supports arguments -Wcast-qual: YES
00:03:31.705 Compiler for C supports arguments -Wdeprecated: YES
00:03:31.705 Compiler for C supports arguments -Wformat: YES
00:03:31.705 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:31.705 Compiler for C supports arguments -Wformat-security: NO
00:03:31.705 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:31.705 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:31.705 Compiler for C supports arguments -Wnested-externs: YES
00:03:31.705 Compiler for C supports arguments -Wold-style-definition: YES
00:03:31.705 Compiler for C supports arguments -Wpointer-arith: YES
00:03:31.705 Compiler for C supports arguments -Wsign-compare: YES
00:03:31.705 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:31.705 Compiler for C supports arguments -Wundef: YES
00:03:31.705 Compiler for C supports arguments -Wwrite-strings: YES
00:03:31.705 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:31.705 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:31.705 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:31.705 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:31.705 Program objdump found: YES (/usr/bin/objdump)
00:03:31.706 Compiler for C supports arguments -mavx512f: YES
00:03:31.706 Checking if "AVX512 checking" compiles: YES
00:03:31.706 Fetching value of define "__SSE4_2__" : 1
00:03:31.706 Fetching value of define "__AES__" : 1
00:03:31.706 Fetching value of define "__AVX__" : 1
00:03:31.706 Fetching value of define "__AVX2__" : 1
00:03:31.706 Fetching value of define "__AVX512BW__" : 1
00:03:31.706 Fetching value of define "__AVX512CD__" : 1
00:03:31.706 Fetching value of define "__AVX512DQ__" : 1
00:03:31.706 Fetching value of define "__AVX512F__" : 1
00:03:31.706 Fetching value of define "__AVX512VL__" : 1
00:03:31.706 Fetching value of define "__PCLMUL__" : 1
00:03:31.706 Fetching value of define "__RDRND__" : 1
00:03:31.706 Fetching value of define "__RDSEED__" : 1
00:03:31.706 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:31.706 Fetching value of define "__znver1__" : (undefined)
00:03:31.706 Fetching value of define "__znver2__" : (undefined)
00:03:31.706 Fetching value of define "__znver3__" : (undefined)
00:03:31.706 Fetching value of define "__znver4__" : (undefined)
00:03:31.706 Library asan found: YES
00:03:31.706 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:31.706 Message: lib/log: Defining dependency "log"
00:03:31.706 Message: lib/kvargs: Defining dependency "kvargs"
00:03:31.706 Message: lib/telemetry: Defining dependency "telemetry"
00:03:31.706 Library rt found: YES
00:03:31.706 Checking for function "getentropy" : NO
00:03:31.706 Message: lib/eal: Defining dependency "eal"
00:03:31.706 Message: lib/ring: Defining dependency "ring"
00:03:31.706 Message: lib/rcu: Defining dependency "rcu"
00:03:31.706 Message: lib/mempool: Defining dependency "mempool"
00:03:31.706 Message: lib/mbuf: Defining dependency "mbuf"
00:03:31.706 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:31.706 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:31.706 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:31.706 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:31.706 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:31.706 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:31.706 Compiler for C supports arguments -mpclmul: YES
00:03:31.706 Compiler for C supports arguments -maes: YES
00:03:31.706 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:31.706 Compiler for C supports arguments -mavx512bw: YES
00:03:31.706 Compiler for C supports arguments -mavx512dq: YES
00:03:31.706 Compiler for C supports arguments -mavx512vl: YES
00:03:31.706 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:31.706 Compiler for C supports arguments -mavx2: YES
00:03:31.706 Compiler for C supports arguments -mavx: YES
00:03:31.706 Message: lib/net: Defining dependency "net"
00:03:31.706 Message: lib/meter: Defining dependency "meter"
00:03:31.706 Message: lib/ethdev: Defining dependency "ethdev"
00:03:31.706 Message: lib/pci: Defining dependency "pci"
00:03:31.706 Message: lib/cmdline: Defining dependency "cmdline"
00:03:31.706 Message: lib/hash: Defining dependency "hash"
00:03:31.706 Message: lib/timer: Defining dependency "timer"
00:03:31.706 Message: lib/compressdev: Defining dependency "compressdev"
00:03:31.706 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:31.706 Message: lib/dmadev: Defining dependency "dmadev"
00:03:31.706 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:31.706 Message: lib/power: Defining dependency "power"
00:03:31.706 Message: lib/reorder: Defining dependency "reorder"
00:03:31.706 Message: lib/security: Defining dependency "security"
00:03:31.706 Has header "linux/userfaultfd.h" : YES
00:03:31.706 Has header "linux/vduse.h" : YES
00:03:31.706 Message: lib/vhost: Defining dependency "vhost"
00:03:31.706 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:31.706 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:31.706 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:31.706 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:31.706 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:31.706 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:31.706 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:31.706 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:31.706 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:31.706 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:31.706 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:31.706 Configuring doxy-api-html.conf using configuration
00:03:31.706 Configuring doxy-api-man.conf using configuration
00:03:31.706 Program mandb found: YES (/usr/bin/mandb)
00:03:31.706 Program sphinx-build found: NO
00:03:31.706 Configuring rte_build_config.h using configuration
00:03:31.706 Message: 
00:03:31.706 =================
00:03:31.706 Applications Enabled
00:03:31.706 =================
00:03:31.706 
00:03:31.706 apps:
00:03:31.706 
00:03:31.706 
00:03:31.706 Message: 
00:03:31.706 =================
00:03:31.706 Libraries Enabled
00:03:31.706 =================
00:03:31.706 
00:03:31.706 libs:
00:03:31.706 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:31.706 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:03:31.706 cryptodev, dmadev, power, reorder, security, vhost, 
00:03:31.706 
00:03:31.706 Message: 
00:03:31.706 ===============
00:03:31.706 Drivers Enabled
00:03:31.706 ===============
00:03:31.706 
00:03:31.706 common:
00:03:31.706 
00:03:31.706 bus:
00:03:31.706 pci, vdev, 
00:03:31.706 mempool:
00:03:31.706 ring, 
00:03:31.706 dma:
00:03:31.706 
00:03:31.706 net:
00:03:31.706 
00:03:31.706 crypto:
00:03:31.706 
00:03:31.706 compress:
00:03:31.706 
00:03:31.706 vdpa:
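The "explicitly disabled via build config" entries in the Content Skipped section that follows are DPDK meson options chosen by SPDK's wrapper build of its dpdk submodule. A hedged, illustrative equivalent for trimming a standalone DPDK tree the same way (the option names disable_libs/enable_drivers are DPDK's; the particular selections here are made up for the example):

# Sketch only: shrink a standalone DPDK build via its meson options.
meson setup build-tmp \
  -Ddisable_libs=graph,node,pipeline \
  -Denable_drivers=bus/pci,bus/vdev,mempool/ring
ninja -C build-tmp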
00:03:31.706 00:03:31.706 00:03:31.706 Message: 00:03:31.706 ================= 00:03:31.706 Content Skipped 00:03:31.706 ================= 00:03:31.706 00:03:31.706 apps: 00:03:31.706 dumpcap: explicitly disabled via build config 00:03:31.706 graph: explicitly disabled via build config 00:03:31.706 pdump: explicitly disabled via build config 00:03:31.706 proc-info: explicitly disabled via build config 00:03:31.706 test-acl: explicitly disabled via build config 00:03:31.706 test-bbdev: explicitly disabled via build config 00:03:31.706 test-cmdline: explicitly disabled via build config 00:03:31.706 test-compress-perf: explicitly disabled via build config 00:03:31.706 test-crypto-perf: explicitly disabled via build config 00:03:31.706 test-dma-perf: explicitly disabled via build config 00:03:31.706 test-eventdev: explicitly disabled via build config 00:03:31.706 test-fib: explicitly disabled via build config 00:03:31.706 test-flow-perf: explicitly disabled via build config 00:03:31.706 test-gpudev: explicitly disabled via build config 00:03:31.706 test-mldev: explicitly disabled via build config 00:03:31.706 test-pipeline: explicitly disabled via build config 00:03:31.706 test-pmd: explicitly disabled via build config 00:03:31.706 test-regex: explicitly disabled via build config 00:03:31.706 test-sad: explicitly disabled via build config 00:03:31.706 test-security-perf: explicitly disabled via build config 00:03:31.706 00:03:31.706 libs: 00:03:31.706 argparse: explicitly disabled via build config 00:03:31.706 metrics: explicitly disabled via build config 00:03:31.706 acl: explicitly disabled via build config 00:03:31.706 bbdev: explicitly disabled via build config 00:03:31.706 bitratestats: explicitly disabled via build config 00:03:31.706 bpf: explicitly disabled via build config 00:03:31.706 cfgfile: explicitly disabled via build config 00:03:31.706 distributor: explicitly disabled via build config 00:03:31.706 efd: explicitly disabled via build config 00:03:31.706 eventdev: explicitly disabled via build config 00:03:31.706 dispatcher: explicitly disabled via build config 00:03:31.706 gpudev: explicitly disabled via build config 00:03:31.706 gro: explicitly disabled via build config 00:03:31.706 gso: explicitly disabled via build config 00:03:31.706 ip_frag: explicitly disabled via build config 00:03:31.706 jobstats: explicitly disabled via build config 00:03:31.706 latencystats: explicitly disabled via build config 00:03:31.706 lpm: explicitly disabled via build config 00:03:31.706 member: explicitly disabled via build config 00:03:31.706 pcapng: explicitly disabled via build config 00:03:31.706 rawdev: explicitly disabled via build config 00:03:31.706 regexdev: explicitly disabled via build config 00:03:31.706 mldev: explicitly disabled via build config 00:03:31.706 rib: explicitly disabled via build config 00:03:31.706 sched: explicitly disabled via build config 00:03:31.706 stack: explicitly disabled via build config 00:03:31.706 ipsec: explicitly disabled via build config 00:03:31.706 pdcp: explicitly disabled via build config 00:03:31.706 fib: explicitly disabled via build config 00:03:31.706 port: explicitly disabled via build config 00:03:31.706 pdump: explicitly disabled via build config 00:03:31.706 table: explicitly disabled via build config 00:03:31.706 pipeline: explicitly disabled via build config 00:03:31.707 graph: explicitly disabled via build config 00:03:31.707 node: explicitly disabled via build config 00:03:31.707 00:03:31.707 drivers: 00:03:31.707 common/cpt: not in 
enabled drivers build config 00:03:31.707 common/dpaax: not in enabled drivers build config 00:03:31.707 common/iavf: not in enabled drivers build config 00:03:31.707 common/idpf: not in enabled drivers build config 00:03:31.707 common/ionic: not in enabled drivers build config 00:03:31.707 common/mvep: not in enabled drivers build config 00:03:31.707 common/octeontx: not in enabled drivers build config 00:03:31.707 bus/auxiliary: not in enabled drivers build config 00:03:31.707 bus/cdx: not in enabled drivers build config 00:03:31.707 bus/dpaa: not in enabled drivers build config 00:03:31.707 bus/fslmc: not in enabled drivers build config 00:03:31.707 bus/ifpga: not in enabled drivers build config 00:03:31.707 bus/platform: not in enabled drivers build config 00:03:31.707 bus/uacce: not in enabled drivers build config 00:03:31.707 bus/vmbus: not in enabled drivers build config 00:03:31.707 common/cnxk: not in enabled drivers build config 00:03:31.707 common/mlx5: not in enabled drivers build config 00:03:31.707 common/nfp: not in enabled drivers build config 00:03:31.707 common/nitrox: not in enabled drivers build config 00:03:31.707 common/qat: not in enabled drivers build config 00:03:31.707 common/sfc_efx: not in enabled drivers build config 00:03:31.707 mempool/bucket: not in enabled drivers build config 00:03:31.707 mempool/cnxk: not in enabled drivers build config 00:03:31.707 mempool/dpaa: not in enabled drivers build config 00:03:31.707 mempool/dpaa2: not in enabled drivers build config 00:03:31.707 mempool/octeontx: not in enabled drivers build config 00:03:31.707 mempool/stack: not in enabled drivers build config 00:03:31.707 dma/cnxk: not in enabled drivers build config 00:03:31.707 dma/dpaa: not in enabled drivers build config 00:03:31.707 dma/dpaa2: not in enabled drivers build config 00:03:31.707 dma/hisilicon: not in enabled drivers build config 00:03:31.707 dma/idxd: not in enabled drivers build config 00:03:31.707 dma/ioat: not in enabled drivers build config 00:03:31.707 dma/skeleton: not in enabled drivers build config 00:03:31.707 net/af_packet: not in enabled drivers build config 00:03:31.707 net/af_xdp: not in enabled drivers build config 00:03:31.707 net/ark: not in enabled drivers build config 00:03:31.707 net/atlantic: not in enabled drivers build config 00:03:31.707 net/avp: not in enabled drivers build config 00:03:31.707 net/axgbe: not in enabled drivers build config 00:03:31.707 net/bnx2x: not in enabled drivers build config 00:03:31.707 net/bnxt: not in enabled drivers build config 00:03:31.707 net/bonding: not in enabled drivers build config 00:03:31.707 net/cnxk: not in enabled drivers build config 00:03:31.707 net/cpfl: not in enabled drivers build config 00:03:31.707 net/cxgbe: not in enabled drivers build config 00:03:31.707 net/dpaa: not in enabled drivers build config 00:03:31.707 net/dpaa2: not in enabled drivers build config 00:03:31.707 net/e1000: not in enabled drivers build config 00:03:31.707 net/ena: not in enabled drivers build config 00:03:31.707 net/enetc: not in enabled drivers build config 00:03:31.707 net/enetfec: not in enabled drivers build config 00:03:31.707 net/enic: not in enabled drivers build config 00:03:31.707 net/failsafe: not in enabled drivers build config 00:03:31.707 net/fm10k: not in enabled drivers build config 00:03:31.707 net/gve: not in enabled drivers build config 00:03:31.707 net/hinic: not in enabled drivers build config 00:03:31.707 net/hns3: not in enabled drivers build config 00:03:31.707 net/i40e: not in enabled 
drivers build config 00:03:31.707 net/iavf: not in enabled drivers build config 00:03:31.707 net/ice: not in enabled drivers build config 00:03:31.707 net/idpf: not in enabled drivers build config 00:03:31.707 net/igc: not in enabled drivers build config 00:03:31.707 net/ionic: not in enabled drivers build config 00:03:31.707 net/ipn3ke: not in enabled drivers build config 00:03:31.707 net/ixgbe: not in enabled drivers build config 00:03:31.707 net/mana: not in enabled drivers build config 00:03:31.707 net/memif: not in enabled drivers build config 00:03:31.707 net/mlx4: not in enabled drivers build config 00:03:31.707 net/mlx5: not in enabled drivers build config 00:03:31.707 net/mvneta: not in enabled drivers build config 00:03:31.707 net/mvpp2: not in enabled drivers build config 00:03:31.707 net/netvsc: not in enabled drivers build config 00:03:31.707 net/nfb: not in enabled drivers build config 00:03:31.707 net/nfp: not in enabled drivers build config 00:03:31.707 net/ngbe: not in enabled drivers build config 00:03:31.707 net/null: not in enabled drivers build config 00:03:31.707 net/octeontx: not in enabled drivers build config 00:03:31.707 net/octeon_ep: not in enabled drivers build config 00:03:31.707 net/pcap: not in enabled drivers build config 00:03:31.707 net/pfe: not in enabled drivers build config 00:03:31.707 net/qede: not in enabled drivers build config 00:03:31.707 net/ring: not in enabled drivers build config 00:03:31.707 net/sfc: not in enabled drivers build config 00:03:31.707 net/softnic: not in enabled drivers build config 00:03:31.707 net/tap: not in enabled drivers build config 00:03:31.707 net/thunderx: not in enabled drivers build config 00:03:31.707 net/txgbe: not in enabled drivers build config 00:03:31.707 net/vdev_netvsc: not in enabled drivers build config 00:03:31.707 net/vhost: not in enabled drivers build config 00:03:31.707 net/virtio: not in enabled drivers build config 00:03:31.707 net/vmxnet3: not in enabled drivers build config 00:03:31.707 raw/*: missing internal dependency, "rawdev" 00:03:31.707 crypto/armv8: not in enabled drivers build config 00:03:31.707 crypto/bcmfs: not in enabled drivers build config 00:03:31.707 crypto/caam_jr: not in enabled drivers build config 00:03:31.707 crypto/ccp: not in enabled drivers build config 00:03:31.707 crypto/cnxk: not in enabled drivers build config 00:03:31.707 crypto/dpaa_sec: not in enabled drivers build config 00:03:31.707 crypto/dpaa2_sec: not in enabled drivers build config 00:03:31.707 crypto/ipsec_mb: not in enabled drivers build config 00:03:31.707 crypto/mlx5: not in enabled drivers build config 00:03:31.707 crypto/mvsam: not in enabled drivers build config 00:03:31.707 crypto/nitrox: not in enabled drivers build config 00:03:31.707 crypto/null: not in enabled drivers build config 00:03:31.707 crypto/octeontx: not in enabled drivers build config 00:03:31.707 crypto/openssl: not in enabled drivers build config 00:03:31.707 crypto/scheduler: not in enabled drivers build config 00:03:31.707 crypto/uadk: not in enabled drivers build config 00:03:31.707 crypto/virtio: not in enabled drivers build config 00:03:31.707 compress/isal: not in enabled drivers build config 00:03:31.707 compress/mlx5: not in enabled drivers build config 00:03:31.707 compress/nitrox: not in enabled drivers build config 00:03:31.707 compress/octeontx: not in enabled drivers build config 00:03:31.707 compress/zlib: not in enabled drivers build config 00:03:31.707 regex/*: missing internal dependency, "regexdev" 00:03:31.707 ml/*: 
missing internal dependency, "mldev"
00:03:31.707 vdpa/ifc: not in enabled drivers build config
00:03:31.707 vdpa/mlx5: not in enabled drivers build config
00:03:31.707 vdpa/nfp: not in enabled drivers build config
00:03:31.707 vdpa/sfc: not in enabled drivers build config
00:03:31.707 event/*: missing internal dependency, "eventdev"
00:03:31.707 baseband/*: missing internal dependency, "bbdev"
00:03:31.707 gpu/*: missing internal dependency, "gpudev"
00:03:31.707
00:03:31.707
00:03:31.707 Build targets in project: 84
00:03:31.707
00:03:31.707 DPDK 24.03.0
00:03:31.707
00:03:31.707 User defined options
00:03:31.707 buildtype : debug
00:03:31.707 default_library : shared
00:03:31.707 libdir : lib
00:03:31.707 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:31.707 b_sanitize : address
00:03:31.707 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:31.707 c_link_args :
00:03:31.707 cpu_instruction_set: native
00:03:31.707 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:31.707 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:31.707 enable_docs : false
00:03:31.707 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:31.707 enable_kmods : false
00:03:31.707 max_lcores : 128
00:03:31.707 tests : false
00:03:31.707
00:03:31.707 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:31.966 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:31.966 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:31.966 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:31.966 [3/267] Linking static target lib/librte_kvargs.a
00:03:31.966 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:32.224 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:32.224 [6/267] Linking static target lib/librte_log.a
00:03:32.482 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:32.482 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:32.482 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:32.482 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:32.482 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:32.482 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.482 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:32.482 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:32.482 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:32.482 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:32.741 [17/267] Linking static target lib/librte_telemetry.a
00:03:32.741 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:32.741 [19/267] Compiling C
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:33.000 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:33.000 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.000 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:33.000 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:33.000 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:33.000 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:33.000 [26/267] Linking target lib/librte_log.so.24.1 00:03:33.000 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:33.000 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:33.258 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:33.258 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:33.258 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:33.258 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:33.258 [33/267] Linking target lib/librte_kvargs.so.24.1 00:03:33.516 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.516 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:33.516 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:33.516 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:33.516 [38/267] Linking target lib/librte_telemetry.so.24.1 00:03:33.516 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:33.516 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:33.516 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:33.516 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:33.516 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:33.516 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:33.775 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:33.775 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:33.775 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:33.775 [48/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:34.032 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:34.032 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:34.032 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:34.032 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:34.032 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:34.412 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:34.412 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:34.412 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:34.412 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:34.412 [58/267] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:34.412 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:34.412 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:34.412 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:34.412 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:34.412 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:34.412 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:34.671 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:34.671 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:34.671 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:34.671 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:34.671 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:34.929 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:34.930 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:34.930 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:34.930 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:34.930 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:34.930 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:34.930 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:34.930 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:35.188 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.188 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:35.188 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:35.188 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:35.447 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:35.447 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:35.447 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:35.447 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:35.447 [86/267] Linking static target lib/librte_ring.a 00:03:35.705 [87/267] Linking static target lib/librte_eal.a 00:03:35.705 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:35.705 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:35.705 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:35.705 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:35.965 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:35.965 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:35.965 [94/267] Linking static target lib/librte_mempool.a 00:03:35.965 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:35.965 [96/267] Linking static target lib/librte_rcu.a 00:03:35.965 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.224 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:36.224 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 
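Note: the bracketed [N/267] prefixes above are ninja's progress counter as it works through the DPDK build steps inside /home/vagrant/spdk_repo/spdk/dpdk/build-tmp. For reference, a hand-run equivalent of this configure-and-build stage might look roughly like the sketch below; the option values are copied from the "User defined options" summary earlier in the log, but the exact invocation is driven by SPDK's own build scripts and is not itself shown here:

  # Configure DPDK out of tree with the options recorded in the summary above
  # (abbreviated: the log also enables the power/* drivers and passes the
  # full disable_apps/disable_libs lists shown earlier).
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --buildtype=debug --default-library=shared \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false
  # Then drive the 267-step build with the same parallelism the log reports
  # further down ("/usr/local/bin/ninja -C .../build-tmp -j 10").
  ninja -C build-tmp -j 10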
00:03:36.224 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:36.483 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:36.483 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:36.483 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.743 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:36.743 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:36.743 [106/267] Linking static target lib/librte_net.a 00:03:36.743 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:36.743 [108/267] Linking static target lib/librte_meter.a 00:03:36.743 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:36.743 [110/267] Linking static target lib/librte_mbuf.a 00:03:37.001 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:37.001 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:37.001 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:37.001 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:37.001 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.258 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.258 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.258 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:37.517 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:37.517 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:37.783 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:37.783 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.783 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.783 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:37.783 [125/267] Linking static target lib/librte_pci.a 00:03:38.041 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:38.041 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:38.041 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:38.041 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:38.041 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:38.041 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:38.041 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:38.300 [133/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.300 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:38.300 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:38.300 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:38.300 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:38.300 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:38.300 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:38.300 [140/267] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:38.300 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.300 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.300 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:38.300 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:38.559 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:38.559 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:38.559 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:38.559 [148/267] Linking static target lib/librte_cmdline.a 00:03:38.818 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:38.818 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:38.818 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:38.818 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:38.818 [153/267] Linking static target lib/librte_timer.a 00:03:39.076 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:39.076 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:39.076 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:39.076 [157/267] Linking static target lib/librte_compressdev.a 00:03:39.076 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:39.361 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:39.361 [160/267] Linking static target lib/librte_hash.a 00:03:39.361 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.361 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.361 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:39.361 [164/267] Linking static target lib/librte_ethdev.a 00:03:39.361 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:39.361 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:39.361 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:39.361 [168/267] Linking static target lib/librte_dmadev.a 00:03:39.684 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:39.684 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.684 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:39.684 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:39.944 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.944 [174/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:39.944 [175/267] Linking static target lib/librte_cryptodev.a 00:03:39.944 [176/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.944 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:39.944 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:39.944 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:40.203 [180/267] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.203 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:40.203 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.203 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.461 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.461 [185/267] Linking static target lib/librte_power.a 00:03:40.461 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:40.461 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:40.720 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:40.720 [189/267] Linking static target lib/librte_security.a 00:03:40.720 [190/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:40.720 [191/267] Linking static target lib/librte_reorder.a 00:03:40.720 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:40.978 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:41.237 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.237 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:41.237 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.237 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:41.496 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:41.496 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.496 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:41.496 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:41.756 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:41.756 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:41.756 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:42.015 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:42.015 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:42.015 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:42.015 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:42.015 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:42.015 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.275 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.275 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:42.275 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.275 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.275 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.275 [216/267] Linking static target drivers/librte_bus_pci.a 00:03:42.275 [217/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.275 [218/267] Linking static target drivers/librte_bus_vdev.a 00:03:42.275 [219/267] 
Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:42.275 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:42.535 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:42.535 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.535 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:42.535 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:42.535 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.535 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.102 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:44.038 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.038 [229/267] Linking target lib/librte_eal.so.24.1 00:03:44.297 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.297 [231/267] Linking target lib/librte_timer.so.24.1 00:03:44.297 [232/267] Linking target lib/librte_meter.so.24.1 00:03:44.297 [233/267] Linking target lib/librte_ring.so.24.1 00:03:44.297 [234/267] Linking target lib/librte_pci.so.24.1 00:03:44.297 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.297 [236/267] Linking target lib/librte_dmadev.so.24.1 00:03:44.297 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.297 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.297 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.297 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.556 [241/267] Linking target lib/librte_rcu.so.24.1 00:03:44.556 [242/267] Linking target lib/librte_mempool.so.24.1 00:03:44.556 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.556 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.556 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.556 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.556 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.556 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:44.815 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.815 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:03:44.815 [251/267] Linking target lib/librte_reorder.so.24.1 00:03:44.815 [252/267] Linking target lib/librte_net.so.24.1 00:03:44.815 [253/267] Linking target lib/librte_compressdev.so.24.1 00:03:44.815 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:44.815 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:44.815 [256/267] Linking target lib/librte_security.so.24.1 00:03:44.815 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:44.815 [258/267] Linking target lib/librte_hash.so.24.1 00:03:45.073 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.332 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:45.332 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:45.591 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.591 [263/267] Linking target lib/librte_power.so.24.1 00:03:46.158 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.158 [265/267] Linking static target lib/librte_vhost.a 00:03:47.534 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.534 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:47.534 INFO: autodetecting backend as ninja 00:03:47.534 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:05.736 CC lib/log/log.o 00:04:05.736 CC lib/ut/ut.o 00:04:05.736 CC lib/log/log_deprecated.o 00:04:05.736 CC lib/log/log_flags.o 00:04:05.736 CC lib/ut_mock/mock.o 00:04:05.736 LIB libspdk_ut_mock.a 00:04:05.736 LIB libspdk_ut.a 00:04:05.736 LIB libspdk_log.a 00:04:05.736 SO libspdk_ut_mock.so.6.0 00:04:05.736 SO libspdk_ut.so.2.0 00:04:05.736 SO libspdk_log.so.7.1 00:04:05.736 SYMLINK libspdk_ut.so 00:04:05.736 SYMLINK libspdk_ut_mock.so 00:04:05.736 SYMLINK libspdk_log.so 00:04:05.736 CC lib/ioat/ioat.o 00:04:05.736 CC lib/dma/dma.o 00:04:05.736 CC lib/util/base64.o 00:04:05.736 CXX lib/trace_parser/trace.o 00:04:05.736 CC lib/util/bit_array.o 00:04:05.736 CC lib/util/cpuset.o 00:04:05.736 CC lib/util/crc16.o 00:04:05.736 CC lib/util/crc32.o 00:04:05.736 CC lib/util/crc32c.o 00:04:05.736 CC lib/vfio_user/host/vfio_user_pci.o 00:04:05.736 CC lib/vfio_user/host/vfio_user.o 00:04:05.736 CC lib/util/crc32_ieee.o 00:04:05.736 CC lib/util/crc64.o 00:04:05.736 CC lib/util/dif.o 00:04:05.736 CC lib/util/fd.o 00:04:05.736 LIB libspdk_dma.a 00:04:05.736 CC lib/util/fd_group.o 00:04:05.736 LIB libspdk_ioat.a 00:04:05.736 CC lib/util/file.o 00:04:05.736 SO libspdk_dma.so.5.0 00:04:05.736 SO libspdk_ioat.so.7.0 00:04:05.736 CC lib/util/hexlify.o 00:04:05.736 SYMLINK libspdk_dma.so 00:04:05.736 CC lib/util/iov.o 00:04:05.736 SYMLINK libspdk_ioat.so 00:04:05.736 CC lib/util/math.o 00:04:05.736 CC lib/util/net.o 00:04:05.736 CC lib/util/pipe.o 00:04:05.736 LIB libspdk_vfio_user.a 00:04:05.736 CC lib/util/strerror_tls.o 00:04:05.736 SO libspdk_vfio_user.so.5.0 00:04:05.736 CC lib/util/string.o 00:04:05.736 CC lib/util/uuid.o 00:04:05.736 CC lib/util/xor.o 00:04:05.736 SYMLINK libspdk_vfio_user.so 00:04:05.736 CC lib/util/zipf.o 00:04:05.736 CC lib/util/md5.o 00:04:05.736 LIB libspdk_util.a 00:04:05.736 LIB libspdk_trace_parser.a 00:04:05.736 SO libspdk_util.so.10.1 00:04:05.736 SO libspdk_trace_parser.so.6.0 00:04:05.736 SYMLINK libspdk_trace_parser.so 00:04:05.736 SYMLINK libspdk_util.so 00:04:05.736 CC lib/rdma_utils/rdma_utils.o 00:04:05.736 CC lib/env_dpdk/env.o 00:04:05.736 CC lib/env_dpdk/memory.o 00:04:05.736 CC lib/env_dpdk/init.o 00:04:05.736 CC lib/env_dpdk/pci.o 00:04:05.736 CC lib/env_dpdk/threads.o 00:04:05.736 CC lib/conf/conf.o 00:04:05.736 CC lib/vmd/vmd.o 00:04:05.736 CC lib/idxd/idxd.o 00:04:05.736 CC lib/json/json_parse.o 00:04:05.736 CC lib/json/json_util.o 00:04:05.736 LIB libspdk_conf.a 00:04:05.736 SO libspdk_conf.so.6.0 00:04:05.736 CC lib/json/json_write.o 00:04:05.736 LIB libspdk_rdma_utils.a 00:04:05.736 CC lib/idxd/idxd_user.o 00:04:05.736 SO libspdk_rdma_utils.so.1.0 00:04:05.736 SYMLINK libspdk_conf.so 00:04:05.736 CC lib/idxd/idxd_kernel.o 00:04:05.736 SYMLINK libspdk_rdma_utils.so 00:04:05.736 CC lib/vmd/led.o 00:04:05.736 CC 
lib/env_dpdk/pci_ioat.o 00:04:05.736 CC lib/env_dpdk/pci_virtio.o 00:04:05.736 CC lib/env_dpdk/pci_vmd.o 00:04:05.736 CC lib/env_dpdk/pci_idxd.o 00:04:05.736 CC lib/env_dpdk/pci_event.o 00:04:05.736 CC lib/env_dpdk/sigbus_handler.o 00:04:05.736 LIB libspdk_json.a 00:04:05.736 CC lib/env_dpdk/pci_dpdk.o 00:04:05.736 LIB libspdk_idxd.a 00:04:05.736 LIB libspdk_vmd.a 00:04:05.736 SO libspdk_json.so.6.0 00:04:05.736 SO libspdk_idxd.so.12.1 00:04:05.736 SO libspdk_vmd.so.6.0 00:04:05.736 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:05.736 SYMLINK libspdk_json.so 00:04:05.736 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:05.736 SYMLINK libspdk_vmd.so 00:04:05.736 SYMLINK libspdk_idxd.so 00:04:05.736 CC lib/rdma_provider/common.o 00:04:05.736 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:05.994 LIB libspdk_rdma_provider.a 00:04:05.994 CC lib/jsonrpc/jsonrpc_server.o 00:04:05.994 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:05.994 CC lib/jsonrpc/jsonrpc_client.o 00:04:05.994 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:05.994 SO libspdk_rdma_provider.so.7.0 00:04:05.994 SYMLINK libspdk_rdma_provider.so 00:04:06.252 LIB libspdk_jsonrpc.a 00:04:06.252 LIB libspdk_env_dpdk.a 00:04:06.252 SO libspdk_jsonrpc.so.6.0 00:04:06.252 SYMLINK libspdk_jsonrpc.so 00:04:06.252 SO libspdk_env_dpdk.so.15.1 00:04:06.509 SYMLINK libspdk_env_dpdk.so 00:04:06.509 CC lib/rpc/rpc.o 00:04:06.767 LIB libspdk_rpc.a 00:04:06.767 SO libspdk_rpc.so.6.0 00:04:07.025 SYMLINK libspdk_rpc.so 00:04:07.025 CC lib/notify/notify.o 00:04:07.025 CC lib/notify/notify_rpc.o 00:04:07.025 CC lib/trace/trace_rpc.o 00:04:07.025 CC lib/trace/trace_flags.o 00:04:07.025 CC lib/trace/trace.o 00:04:07.025 CC lib/keyring/keyring.o 00:04:07.025 CC lib/keyring/keyring_rpc.o 00:04:07.282 LIB libspdk_notify.a 00:04:07.282 LIB libspdk_keyring.a 00:04:07.282 SO libspdk_notify.so.6.0 00:04:07.282 LIB libspdk_trace.a 00:04:07.282 SO libspdk_keyring.so.2.0 00:04:07.282 SYMLINK libspdk_notify.so 00:04:07.282 SO libspdk_trace.so.11.0 00:04:07.540 SYMLINK libspdk_keyring.so 00:04:07.540 SYMLINK libspdk_trace.so 00:04:07.798 CC lib/sock/sock.o 00:04:07.798 CC lib/thread/iobuf.o 00:04:07.798 CC lib/sock/sock_rpc.o 00:04:07.798 CC lib/thread/thread.o 00:04:08.057 LIB libspdk_sock.a 00:04:08.057 SO libspdk_sock.so.10.0 00:04:08.315 SYMLINK libspdk_sock.so 00:04:08.573 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:08.573 CC lib/nvme/nvme_ns.o 00:04:08.574 CC lib/nvme/nvme_ctrlr.o 00:04:08.574 CC lib/nvme/nvme_ns_cmd.o 00:04:08.574 CC lib/nvme/nvme_pcie_common.o 00:04:08.574 CC lib/nvme/nvme_fabric.o 00:04:08.574 CC lib/nvme/nvme.o 00:04:08.574 CC lib/nvme/nvme_qpair.o 00:04:08.574 CC lib/nvme/nvme_pcie.o 00:04:09.139 CC lib/nvme/nvme_quirks.o 00:04:09.140 CC lib/nvme/nvme_transport.o 00:04:09.140 CC lib/nvme/nvme_discovery.o 00:04:09.140 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:09.140 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:09.397 CC lib/nvme/nvme_tcp.o 00:04:09.397 LIB libspdk_thread.a 00:04:09.397 CC lib/nvme/nvme_opal.o 00:04:09.397 SO libspdk_thread.so.11.0 00:04:09.397 SYMLINK libspdk_thread.so 00:04:09.397 CC lib/nvme/nvme_io_msg.o 00:04:09.397 CC lib/nvme/nvme_poll_group.o 00:04:09.397 CC lib/nvme/nvme_zns.o 00:04:09.397 CC lib/nvme/nvme_stubs.o 00:04:09.655 CC lib/nvme/nvme_auth.o 00:04:09.655 CC lib/nvme/nvme_cuse.o 00:04:09.655 CC lib/nvme/nvme_rdma.o 00:04:10.228 CC lib/blob/blobstore.o 00:04:10.228 CC lib/accel/accel.o 00:04:10.228 CC lib/virtio/virtio.o 00:04:10.228 CC lib/init/json_config.o 00:04:10.228 CC lib/fsdev/fsdev.o 00:04:10.228 CC lib/fsdev/fsdev_io.o 00:04:10.488 CC 
lib/init/subsystem.o 00:04:10.488 CC lib/init/subsystem_rpc.o 00:04:10.488 CC lib/blob/request.o 00:04:10.488 CC lib/virtio/virtio_vhost_user.o 00:04:10.488 CC lib/blob/zeroes.o 00:04:10.488 CC lib/init/rpc.o 00:04:10.745 CC lib/blob/blob_bs_dev.o 00:04:10.745 CC lib/fsdev/fsdev_rpc.o 00:04:10.745 CC lib/virtio/virtio_vfio_user.o 00:04:10.745 LIB libspdk_init.a 00:04:10.745 CC lib/virtio/virtio_pci.o 00:04:10.745 CC lib/accel/accel_rpc.o 00:04:10.745 SO libspdk_init.so.6.0 00:04:10.745 LIB libspdk_fsdev.a 00:04:10.745 SYMLINK libspdk_init.so 00:04:10.745 CC lib/accel/accel_sw.o 00:04:10.745 SO libspdk_fsdev.so.2.0 00:04:11.004 SYMLINK libspdk_fsdev.so 00:04:11.004 LIB libspdk_virtio.a 00:04:11.004 SO libspdk_virtio.so.7.0 00:04:11.004 CC lib/event/app.o 00:04:11.004 CC lib/event/reactor.o 00:04:11.004 CC lib/event/log_rpc.o 00:04:11.004 CC lib/event/app_rpc.o 00:04:11.004 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:11.004 CC lib/event/scheduler_static.o 00:04:11.004 SYMLINK libspdk_virtio.so 00:04:11.262 LIB libspdk_nvme.a 00:04:11.262 SO libspdk_nvme.so.15.0 00:04:11.262 LIB libspdk_accel.a 00:04:11.521 SO libspdk_accel.so.16.0 00:04:11.521 SYMLINK libspdk_accel.so 00:04:11.521 LIB libspdk_event.a 00:04:11.521 SO libspdk_event.so.14.0 00:04:11.521 SYMLINK libspdk_event.so 00:04:11.521 SYMLINK libspdk_nvme.so 00:04:11.779 LIB libspdk_fuse_dispatcher.a 00:04:11.779 CC lib/bdev/part.o 00:04:11.779 CC lib/bdev/bdev_rpc.o 00:04:11.779 CC lib/bdev/bdev_zone.o 00:04:11.779 CC lib/bdev/scsi_nvme.o 00:04:11.779 CC lib/bdev/bdev.o 00:04:11.779 SO libspdk_fuse_dispatcher.so.1.0 00:04:11.779 SYMLINK libspdk_fuse_dispatcher.so 00:04:13.732 LIB libspdk_blob.a 00:04:13.732 SO libspdk_blob.so.12.0 00:04:13.732 SYMLINK libspdk_blob.so 00:04:13.732 CC lib/blobfs/blobfs.o 00:04:13.732 CC lib/blobfs/tree.o 00:04:13.732 CC lib/lvol/lvol.o 00:04:14.667 LIB libspdk_lvol.a 00:04:14.667 LIB libspdk_bdev.a 00:04:14.667 SO libspdk_lvol.so.11.0 00:04:14.667 SO libspdk_bdev.so.17.0 00:04:14.667 LIB libspdk_blobfs.a 00:04:14.667 SYMLINK libspdk_lvol.so 00:04:14.667 SO libspdk_blobfs.so.11.0 00:04:14.667 SYMLINK libspdk_bdev.so 00:04:14.925 SYMLINK libspdk_blobfs.so 00:04:14.925 CC lib/nvmf/ctrlr.o 00:04:14.925 CC lib/nvmf/ctrlr_bdev.o 00:04:14.925 CC lib/nvmf/subsystem.o 00:04:14.925 CC lib/nvmf/ctrlr_discovery.o 00:04:14.925 CC lib/ublk/ublk.o 00:04:14.925 CC lib/ublk/ublk_rpc.o 00:04:14.925 CC lib/nvmf/nvmf.o 00:04:14.925 CC lib/scsi/dev.o 00:04:14.925 CC lib/nbd/nbd.o 00:04:14.925 CC lib/ftl/ftl_core.o 00:04:15.183 CC lib/scsi/lun.o 00:04:15.183 CC lib/nbd/nbd_rpc.o 00:04:15.183 CC lib/nvmf/nvmf_rpc.o 00:04:15.441 CC lib/ftl/ftl_init.o 00:04:15.441 LIB libspdk_nbd.a 00:04:15.441 SO libspdk_nbd.so.7.0 00:04:15.441 CC lib/scsi/port.o 00:04:15.441 SYMLINK libspdk_nbd.so 00:04:15.441 CC lib/nvmf/transport.o 00:04:15.441 CC lib/nvmf/tcp.o 00:04:15.441 LIB libspdk_ublk.a 00:04:15.441 CC lib/nvmf/stubs.o 00:04:15.441 SO libspdk_ublk.so.3.0 00:04:15.441 CC lib/ftl/ftl_layout.o 00:04:15.441 CC lib/scsi/scsi.o 00:04:15.441 SYMLINK libspdk_ublk.so 00:04:15.441 CC lib/scsi/scsi_bdev.o 00:04:15.699 CC lib/nvmf/mdns_server.o 00:04:15.699 CC lib/nvmf/rdma.o 00:04:15.699 CC lib/ftl/ftl_debug.o 00:04:15.957 CC lib/nvmf/auth.o 00:04:15.957 CC lib/scsi/scsi_pr.o 00:04:15.957 CC lib/ftl/ftl_io.o 00:04:15.957 CC lib/scsi/scsi_rpc.o 00:04:15.957 CC lib/scsi/task.o 00:04:16.215 CC lib/ftl/ftl_sb.o 00:04:16.215 CC lib/ftl/ftl_l2p.o 00:04:16.216 CC lib/ftl/ftl_l2p_flat.o 00:04:16.216 CC lib/ftl/ftl_nv_cache.o 00:04:16.216 CC 
lib/ftl/ftl_band.o 00:04:16.216 LIB libspdk_scsi.a 00:04:16.216 CC lib/ftl/ftl_band_ops.o 00:04:16.216 SO libspdk_scsi.so.9.0 00:04:16.473 SYMLINK libspdk_scsi.so 00:04:16.473 CC lib/ftl/ftl_writer.o 00:04:16.473 CC lib/ftl/ftl_reloc.o 00:04:16.473 CC lib/ftl/ftl_rq.o 00:04:16.473 CC lib/ftl/ftl_l2p_cache.o 00:04:16.731 CC lib/ftl/ftl_p2l.o 00:04:16.731 CC lib/ftl/ftl_p2l_log.o 00:04:16.731 CC lib/ftl/mngt/ftl_mngt.o 00:04:16.731 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:16.731 CC lib/iscsi/conn.o 00:04:16.731 CC lib/vhost/vhost.o 00:04:16.989 CC lib/vhost/vhost_rpc.o 00:04:16.990 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:16.990 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:16.990 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:16.990 CC lib/vhost/vhost_scsi.o 00:04:16.990 CC lib/vhost/vhost_blk.o 00:04:16.990 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.990 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:17.247 CC lib/vhost/rte_vhost_user.o 00:04:17.247 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:17.247 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:17.247 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:17.247 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:17.247 CC lib/iscsi/init_grp.o 00:04:17.248 CC lib/iscsi/iscsi.o 00:04:17.506 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:17.506 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:17.506 CC lib/iscsi/param.o 00:04:17.506 CC lib/ftl/utils/ftl_conf.o 00:04:17.764 CC lib/iscsi/portal_grp.o 00:04:17.764 CC lib/iscsi/tgt_node.o 00:04:17.764 CC lib/ftl/utils/ftl_md.o 00:04:17.764 LIB libspdk_nvmf.a 00:04:17.764 CC lib/ftl/utils/ftl_mempool.o 00:04:17.764 CC lib/iscsi/iscsi_subsystem.o 00:04:17.764 CC lib/iscsi/iscsi_rpc.o 00:04:17.764 CC lib/iscsi/task.o 00:04:18.022 CC lib/ftl/utils/ftl_bitmap.o 00:04:18.022 SO libspdk_nvmf.so.20.0 00:04:18.022 CC lib/ftl/utils/ftl_property.o 00:04:18.022 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:18.022 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:18.022 SYMLINK libspdk_nvmf.so 00:04:18.022 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:18.022 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:18.280 LIB libspdk_vhost.a 00:04:18.280 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:18.280 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:18.280 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:18.280 SO libspdk_vhost.so.8.0 00:04:18.280 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:18.280 SYMLINK libspdk_vhost.so 00:04:18.280 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:18.280 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:18.280 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:18.280 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:18.280 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:18.280 CC lib/ftl/base/ftl_base_dev.o 00:04:18.280 CC lib/ftl/base/ftl_base_bdev.o 00:04:18.280 CC lib/ftl/ftl_trace.o 00:04:18.538 LIB libspdk_iscsi.a 00:04:18.538 SO libspdk_iscsi.so.8.0 00:04:18.538 LIB libspdk_ftl.a 00:04:18.799 SYMLINK libspdk_iscsi.so 00:04:18.799 SO libspdk_ftl.so.9.0 00:04:19.060 SYMLINK libspdk_ftl.so 00:04:19.320 CC module/env_dpdk/env_dpdk_rpc.o 00:04:19.320 CC module/scheduler/gscheduler/gscheduler.o 00:04:19.320 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:19.579 CC module/sock/posix/posix.o 00:04:19.579 CC module/keyring/file/keyring.o 00:04:19.579 CC module/blob/bdev/blob_bdev.o 00:04:19.579 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:19.579 CC module/accel/ioat/accel_ioat.o 00:04:19.579 CC module/accel/error/accel_error.o 00:04:19.579 CC module/fsdev/aio/fsdev_aio.o 00:04:19.579 LIB libspdk_env_dpdk_rpc.a 00:04:19.579 SO libspdk_env_dpdk_rpc.so.6.0 00:04:19.579 SYMLINK libspdk_env_dpdk_rpc.so 00:04:19.579 LIB 
libspdk_scheduler_gscheduler.a 00:04:19.579 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.579 CC module/keyring/file/keyring_rpc.o 00:04:19.579 SO libspdk_scheduler_gscheduler.so.4.0 00:04:19.579 LIB libspdk_scheduler_dynamic.a 00:04:19.579 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:19.579 CC module/accel/error/accel_error_rpc.o 00:04:19.579 SO libspdk_scheduler_dynamic.so.4.0 00:04:19.579 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.579 SYMLINK libspdk_scheduler_gscheduler.so 00:04:19.579 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.579 SYMLINK libspdk_scheduler_dynamic.so 00:04:19.579 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:19.579 LIB libspdk_blob_bdev.a 00:04:19.837 SO libspdk_blob_bdev.so.12.0 00:04:19.837 LIB libspdk_keyring_file.a 00:04:19.837 CC module/accel/dsa/accel_dsa.o 00:04:19.837 SO libspdk_keyring_file.so.2.0 00:04:19.837 LIB libspdk_accel_error.a 00:04:19.837 LIB libspdk_accel_ioat.a 00:04:19.837 SYMLINK libspdk_blob_bdev.so 00:04:19.837 SO libspdk_accel_error.so.2.0 00:04:19.837 SO libspdk_accel_ioat.so.6.0 00:04:19.837 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.837 SYMLINK libspdk_keyring_file.so 00:04:19.837 CC module/fsdev/aio/linux_aio_mgr.o 00:04:19.837 CC module/accel/iaa/accel_iaa.o 00:04:19.837 CC module/keyring/linux/keyring.o 00:04:19.837 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.837 SYMLINK libspdk_accel_ioat.so 00:04:19.837 SYMLINK libspdk_accel_error.so 00:04:19.837 CC module/keyring/linux/keyring_rpc.o 00:04:20.095 LIB libspdk_accel_iaa.a 00:04:20.095 LIB libspdk_keyring_linux.a 00:04:20.095 SO libspdk_accel_iaa.so.3.0 00:04:20.095 SO libspdk_keyring_linux.so.1.0 00:04:20.095 LIB libspdk_accel_dsa.a 00:04:20.095 LIB libspdk_fsdev_aio.a 00:04:20.095 SO libspdk_accel_dsa.so.5.0 00:04:20.095 SYMLINK libspdk_keyring_linux.so 00:04:20.095 SYMLINK libspdk_accel_iaa.so 00:04:20.095 SO libspdk_fsdev_aio.so.1.0 00:04:20.095 SYMLINK libspdk_accel_dsa.so 00:04:20.095 SYMLINK libspdk_fsdev_aio.so 00:04:20.095 CC module/bdev/error/vbdev_error.o 00:04:20.095 CC module/bdev/lvol/vbdev_lvol.o 00:04:20.095 CC module/bdev/delay/vbdev_delay.o 00:04:20.095 CC module/bdev/gpt/gpt.o 00:04:20.095 CC module/blobfs/bdev/blobfs_bdev.o 00:04:20.353 CC module/bdev/malloc/bdev_malloc.o 00:04:20.353 LIB libspdk_sock_posix.a 00:04:20.353 CC module/bdev/null/bdev_null.o 00:04:20.353 CC module/bdev/nvme/bdev_nvme.o 00:04:20.353 SO libspdk_sock_posix.so.6.0 00:04:20.353 CC module/bdev/passthru/vbdev_passthru.o 00:04:20.353 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:20.353 CC module/bdev/gpt/vbdev_gpt.o 00:04:20.353 SYMLINK libspdk_sock_posix.so 00:04:20.353 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:20.353 CC module/bdev/error/vbdev_error_rpc.o 00:04:20.611 LIB libspdk_blobfs_bdev.a 00:04:20.611 CC module/bdev/null/bdev_null_rpc.o 00:04:20.611 SO libspdk_blobfs_bdev.so.6.0 00:04:20.611 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:20.611 LIB libspdk_bdev_error.a 00:04:20.611 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:20.611 SYMLINK libspdk_blobfs_bdev.so 00:04:20.611 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:20.611 SO libspdk_bdev_error.so.6.0 00:04:20.611 LIB libspdk_bdev_gpt.a 00:04:20.611 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.611 LIB libspdk_bdev_null.a 00:04:20.612 SO libspdk_bdev_gpt.so.6.0 00:04:20.612 SYMLINK libspdk_bdev_error.so 00:04:20.612 SO libspdk_bdev_null.so.6.0 00:04:20.612 LIB libspdk_bdev_delay.a 00:04:20.612 SYMLINK libspdk_bdev_gpt.so 00:04:20.612 SO libspdk_bdev_delay.so.6.0 00:04:20.612 SYMLINK libspdk_bdev_null.so 00:04:20.870 
LIB libspdk_bdev_malloc.a 00:04:20.870 CC module/bdev/raid/bdev_raid.o 00:04:20.870 SO libspdk_bdev_malloc.so.6.0 00:04:20.870 LIB libspdk_bdev_passthru.a 00:04:20.870 SYMLINK libspdk_bdev_delay.so 00:04:20.870 CC module/bdev/split/vbdev_split.o 00:04:20.870 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.870 SO libspdk_bdev_passthru.so.6.0 00:04:20.870 SYMLINK libspdk_bdev_malloc.so 00:04:20.870 CC module/bdev/nvme/nvme_rpc.o 00:04:20.870 SYMLINK libspdk_bdev_passthru.so 00:04:20.870 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.870 CC module/bdev/xnvme/bdev_xnvme.o 00:04:20.870 CC module/bdev/nvme/bdev_mdns_client.o 00:04:20.870 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.870 LIB libspdk_bdev_lvol.a 00:04:20.870 SO libspdk_bdev_lvol.so.6.0 00:04:20.870 CC module/bdev/aio/bdev_aio.o 00:04:21.129 LIB libspdk_bdev_split.a 00:04:21.130 SYMLINK libspdk_bdev_lvol.so 00:04:21.130 CC module/bdev/nvme/vbdev_opal.o 00:04:21.130 SO libspdk_bdev_split.so.6.0 00:04:21.130 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:21.130 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:21.130 SYMLINK libspdk_bdev_split.so 00:04:21.130 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:21.130 LIB libspdk_bdev_zone_block.a 00:04:21.130 LIB libspdk_bdev_xnvme.a 00:04:21.130 CC module/bdev/ftl/bdev_ftl.o 00:04:21.130 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:21.130 CC module/bdev/aio/bdev_aio_rpc.o 00:04:21.130 SO libspdk_bdev_zone_block.so.6.0 00:04:21.130 SO libspdk_bdev_xnvme.so.3.0 00:04:21.130 CC module/bdev/iscsi/bdev_iscsi.o 00:04:21.388 SYMLINK libspdk_bdev_zone_block.so 00:04:21.388 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:21.388 SYMLINK libspdk_bdev_xnvme.so 00:04:21.388 CC module/bdev/raid/bdev_raid_rpc.o 00:04:21.388 CC module/bdev/raid/bdev_raid_sb.o 00:04:21.388 LIB libspdk_bdev_aio.a 00:04:21.388 SO libspdk_bdev_aio.so.6.0 00:04:21.388 SYMLINK libspdk_bdev_aio.so 00:04:21.388 LIB libspdk_bdev_ftl.a 00:04:21.388 CC module/bdev/raid/raid0.o 00:04:21.388 CC module/bdev/raid/raid1.o 00:04:21.388 SO libspdk_bdev_ftl.so.6.0 00:04:21.388 CC module/bdev/raid/concat.o 00:04:21.646 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:21.646 SYMLINK libspdk_bdev_ftl.so 00:04:21.646 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:21.646 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:21.646 LIB libspdk_bdev_iscsi.a 00:04:21.646 SO libspdk_bdev_iscsi.so.6.0 00:04:21.646 SYMLINK libspdk_bdev_iscsi.so 00:04:21.904 LIB libspdk_bdev_raid.a 00:04:21.904 SO libspdk_bdev_raid.so.6.0 00:04:21.904 SYMLINK libspdk_bdev_raid.so 00:04:21.904 LIB libspdk_bdev_virtio.a 00:04:21.904 SO libspdk_bdev_virtio.so.6.0 00:04:22.162 SYMLINK libspdk_bdev_virtio.so 00:04:23.101 LIB libspdk_bdev_nvme.a 00:04:23.101 SO libspdk_bdev_nvme.so.7.1 00:04:23.101 SYMLINK libspdk_bdev_nvme.so 00:04:23.677 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.677 CC module/event/subsystems/vmd/vmd.o 00:04:23.677 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:23.677 CC module/event/subsystems/sock/sock.o 00:04:23.677 CC module/event/subsystems/keyring/keyring.o 00:04:23.677 CC module/event/subsystems/fsdev/fsdev.o 00:04:23.677 CC module/event/subsystems/iobuf/iobuf.o 00:04:23.677 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:23.677 CC module/event/subsystems/scheduler/scheduler.o 00:04:23.677 LIB libspdk_event_keyring.a 00:04:23.677 LIB libspdk_event_vhost_blk.a 00:04:23.677 LIB libspdk_event_sock.a 00:04:23.677 LIB libspdk_event_fsdev.a 00:04:23.677 LIB libspdk_event_vmd.a 00:04:23.677 SO libspdk_event_keyring.so.1.0 00:04:23.677 SO 
libspdk_event_vhost_blk.so.3.0 00:04:23.677 LIB libspdk_event_scheduler.a 00:04:23.677 LIB libspdk_event_iobuf.a 00:04:23.677 SO libspdk_event_fsdev.so.1.0 00:04:23.677 SO libspdk_event_sock.so.5.0 00:04:23.677 SO libspdk_event_vmd.so.6.0 00:04:23.677 SO libspdk_event_scheduler.so.4.0 00:04:23.677 SYMLINK libspdk_event_vhost_blk.so 00:04:23.677 SO libspdk_event_iobuf.so.3.0 00:04:23.677 SYMLINK libspdk_event_keyring.so 00:04:23.677 SYMLINK libspdk_event_sock.so 00:04:23.677 SYMLINK libspdk_event_fsdev.so 00:04:23.935 SYMLINK libspdk_event_vmd.so 00:04:23.935 SYMLINK libspdk_event_scheduler.so 00:04:23.935 SYMLINK libspdk_event_iobuf.so 00:04:24.193 CC module/event/subsystems/accel/accel.o 00:04:24.193 LIB libspdk_event_accel.a 00:04:24.193 SO libspdk_event_accel.so.6.0 00:04:24.450 SYMLINK libspdk_event_accel.so 00:04:24.450 CC module/event/subsystems/bdev/bdev.o 00:04:24.708 LIB libspdk_event_bdev.a 00:04:24.708 SO libspdk_event_bdev.so.6.0 00:04:24.708 SYMLINK libspdk_event_bdev.so 00:04:24.966 CC module/event/subsystems/nbd/nbd.o 00:04:24.966 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:24.966 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:24.966 CC module/event/subsystems/ublk/ublk.o 00:04:24.966 CC module/event/subsystems/scsi/scsi.o 00:04:25.224 LIB libspdk_event_nbd.a 00:04:25.224 LIB libspdk_event_ublk.a 00:04:25.224 SO libspdk_event_nbd.so.6.0 00:04:25.224 SO libspdk_event_ublk.so.3.0 00:04:25.224 LIB libspdk_event_scsi.a 00:04:25.224 SO libspdk_event_scsi.so.6.0 00:04:25.224 SYMLINK libspdk_event_nbd.so 00:04:25.224 SYMLINK libspdk_event_ublk.so 00:04:25.224 LIB libspdk_event_nvmf.a 00:04:25.224 SYMLINK libspdk_event_scsi.so 00:04:25.224 SO libspdk_event_nvmf.so.6.0 00:04:25.224 SYMLINK libspdk_event_nvmf.so 00:04:25.482 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.482 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.482 LIB libspdk_event_iscsi.a 00:04:25.482 LIB libspdk_event_vhost_scsi.a 00:04:25.740 SO libspdk_event_iscsi.so.6.0 00:04:25.740 SO libspdk_event_vhost_scsi.so.3.0 00:04:25.740 SYMLINK libspdk_event_iscsi.so 00:04:25.740 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.740 SO libspdk.so.6.0 00:04:25.740 SYMLINK libspdk.so 00:04:25.997 CC app/trace_record/trace_record.o 00:04:25.997 CC app/spdk_nvme_perf/perf.o 00:04:25.997 CXX app/trace/trace.o 00:04:25.997 CC app/spdk_lspci/spdk_lspci.o 00:04:25.997 CC app/iscsi_tgt/iscsi_tgt.o 00:04:25.997 CC app/nvmf_tgt/nvmf_main.o 00:04:25.997 CC app/spdk_tgt/spdk_tgt.o 00:04:25.997 CC examples/ioat/perf/perf.o 00:04:25.997 CC examples/util/zipf/zipf.o 00:04:25.997 CC test/thread/poller_perf/poller_perf.o 00:04:26.255 LINK spdk_lspci 00:04:26.255 LINK nvmf_tgt 00:04:26.255 LINK poller_perf 00:04:26.255 LINK iscsi_tgt 00:04:26.255 LINK zipf 00:04:26.255 LINK spdk_trace_record 00:04:26.255 LINK spdk_tgt 00:04:26.255 LINK ioat_perf 00:04:26.513 LINK spdk_trace 00:04:26.513 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.513 CC app/spdk_top/spdk_top.o 00:04:26.513 CC app/spdk_nvme_identify/identify.o 00:04:26.513 CC app/spdk_nvme_discover/discovery_aer.o 00:04:26.513 CC examples/ioat/verify/verify.o 00:04:26.513 CC app/spdk_dd/spdk_dd.o 00:04:26.513 CC test/dma/test_dma/test_dma.o 00:04:26.513 LINK interrupt_tgt 00:04:26.772 CC app/fio/nvme/fio_plugin.o 00:04:26.772 LINK spdk_nvme_discover 00:04:26.772 LINK verify 00:04:26.772 CC test/app/bdev_svc/bdev_svc.o 00:04:27.042 LINK bdev_svc 00:04:27.042 LINK spdk_nvme_perf 00:04:27.042 LINK spdk_dd 00:04:27.042 TEST_HEADER include/spdk/accel.h 00:04:27.042 
TEST_HEADER include/spdk/accel_module.h 00:04:27.042 TEST_HEADER include/spdk/assert.h 00:04:27.042 TEST_HEADER include/spdk/barrier.h 00:04:27.042 TEST_HEADER include/spdk/base64.h 00:04:27.042 TEST_HEADER include/spdk/bdev.h 00:04:27.042 TEST_HEADER include/spdk/bdev_module.h 00:04:27.042 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.042 CC examples/thread/thread/thread_ex.o 00:04:27.042 TEST_HEADER include/spdk/bit_array.h 00:04:27.042 TEST_HEADER include/spdk/bit_pool.h 00:04:27.042 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.042 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.042 CC app/vhost/vhost.o 00:04:27.042 TEST_HEADER include/spdk/blobfs.h 00:04:27.042 TEST_HEADER include/spdk/blob.h 00:04:27.042 TEST_HEADER include/spdk/conf.h 00:04:27.042 TEST_HEADER include/spdk/config.h 00:04:27.042 TEST_HEADER include/spdk/cpuset.h 00:04:27.042 TEST_HEADER include/spdk/crc16.h 00:04:27.042 TEST_HEADER include/spdk/crc32.h 00:04:27.042 TEST_HEADER include/spdk/crc64.h 00:04:27.042 TEST_HEADER include/spdk/dif.h 00:04:27.042 TEST_HEADER include/spdk/dma.h 00:04:27.042 TEST_HEADER include/spdk/endian.h 00:04:27.042 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.042 TEST_HEADER include/spdk/env.h 00:04:27.042 TEST_HEADER include/spdk/event.h 00:04:27.042 TEST_HEADER include/spdk/fd_group.h 00:04:27.042 TEST_HEADER include/spdk/fd.h 00:04:27.042 TEST_HEADER include/spdk/file.h 00:04:27.042 TEST_HEADER include/spdk/fsdev.h 00:04:27.042 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.042 TEST_HEADER include/spdk/ftl.h 00:04:27.042 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:27.042 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.042 TEST_HEADER include/spdk/hexlify.h 00:04:27.042 TEST_HEADER include/spdk/histogram_data.h 00:04:27.042 TEST_HEADER include/spdk/idxd.h 00:04:27.042 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.042 TEST_HEADER include/spdk/init.h 00:04:27.042 TEST_HEADER include/spdk/ioat.h 00:04:27.042 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.042 LINK test_dma 00:04:27.042 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.042 TEST_HEADER include/spdk/json.h 00:04:27.042 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.042 TEST_HEADER include/spdk/keyring.h 00:04:27.042 TEST_HEADER include/spdk/keyring_module.h 00:04:27.042 TEST_HEADER include/spdk/likely.h 00:04:27.042 TEST_HEADER include/spdk/log.h 00:04:27.042 TEST_HEADER include/spdk/lvol.h 00:04:27.042 TEST_HEADER include/spdk/md5.h 00:04:27.042 TEST_HEADER include/spdk/memory.h 00:04:27.042 TEST_HEADER include/spdk/mmio.h 00:04:27.042 TEST_HEADER include/spdk/nbd.h 00:04:27.042 TEST_HEADER include/spdk/net.h 00:04:27.042 TEST_HEADER include/spdk/notify.h 00:04:27.042 TEST_HEADER include/spdk/nvme.h 00:04:27.042 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.042 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.042 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.042 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.042 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.042 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.042 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.042 TEST_HEADER include/spdk/nvmf.h 00:04:27.042 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.042 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.042 TEST_HEADER include/spdk/opal.h 00:04:27.043 TEST_HEADER include/spdk/opal_spec.h 00:04:27.043 TEST_HEADER include/spdk/pci_ids.h 00:04:27.043 TEST_HEADER include/spdk/pipe.h 00:04:27.043 TEST_HEADER include/spdk/queue.h 00:04:27.043 TEST_HEADER include/spdk/reduce.h 00:04:27.043 TEST_HEADER include/spdk/rpc.h 00:04:27.043 
TEST_HEADER include/spdk/scheduler.h 00:04:27.043 TEST_HEADER include/spdk/scsi.h 00:04:27.043 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.043 TEST_HEADER include/spdk/sock.h 00:04:27.043 TEST_HEADER include/spdk/stdinc.h 00:04:27.043 TEST_HEADER include/spdk/string.h 00:04:27.043 TEST_HEADER include/spdk/thread.h 00:04:27.043 TEST_HEADER include/spdk/trace.h 00:04:27.043 TEST_HEADER include/spdk/trace_parser.h 00:04:27.043 TEST_HEADER include/spdk/tree.h 00:04:27.043 TEST_HEADER include/spdk/ublk.h 00:04:27.043 TEST_HEADER include/spdk/util.h 00:04:27.043 TEST_HEADER include/spdk/uuid.h 00:04:27.043 TEST_HEADER include/spdk/version.h 00:04:27.043 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.043 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.043 TEST_HEADER include/spdk/vhost.h 00:04:27.306 TEST_HEADER include/spdk/vmd.h 00:04:27.306 TEST_HEADER include/spdk/xor.h 00:04:27.306 TEST_HEADER include/spdk/zipf.h 00:04:27.306 CXX test/cpp_headers/accel.o 00:04:27.306 CC app/fio/bdev/fio_plugin.o 00:04:27.306 LINK vhost 00:04:27.306 CXX test/cpp_headers/accel_module.o 00:04:27.306 LINK thread 00:04:27.306 LINK spdk_nvme 00:04:27.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:27.306 LINK spdk_nvme_identify 00:04:27.306 CXX test/cpp_headers/assert.o 00:04:27.306 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.306 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.306 CXX test/cpp_headers/barrier.o 00:04:27.600 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.600 LINK spdk_top 00:04:27.600 CXX test/cpp_headers/base64.o 00:04:27.600 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.600 CXX test/cpp_headers/bdev.o 00:04:27.600 CXX test/cpp_headers/bdev_module.o 00:04:27.600 CXX test/cpp_headers/bdev_zone.o 00:04:27.600 CC examples/sock/hello_world/hello_sock.o 00:04:27.600 LINK spdk_bdev 00:04:27.860 LINK nvme_fuzz 00:04:27.860 CC test/env/vtophys/vtophys.o 00:04:27.860 CXX test/cpp_headers/bit_array.o 00:04:27.860 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:27.860 LINK hello_sock 00:04:27.860 LINK vtophys 00:04:27.860 CC test/env/memory/memory_ut.o 00:04:27.860 CXX test/cpp_headers/bit_pool.o 00:04:27.860 LINK mem_callbacks 00:04:27.860 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.860 CC test/env/pci/pci_ut.o 00:04:27.860 LINK vhost_fuzz 00:04:27.860 LINK env_dpdk_post_init 00:04:27.860 CXX test/cpp_headers/blob_bdev.o 00:04:28.119 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.119 LINK lsvmd 00:04:28.119 CC examples/idxd/perf/perf.o 00:04:28.119 CC test/app/histogram_perf/histogram_perf.o 00:04:28.119 CXX test/cpp_headers/blobfs.o 00:04:28.119 CC test/app/jsoncat/jsoncat.o 00:04:28.119 CC test/event/event_perf/event_perf.o 00:04:28.119 CC test/event/reactor/reactor.o 00:04:28.376 CC examples/vmd/led/led.o 00:04:28.376 LINK jsoncat 00:04:28.376 LINK histogram_perf 00:04:28.376 CXX test/cpp_headers/blob.o 00:04:28.376 LINK event_perf 00:04:28.376 LINK pci_ut 00:04:28.376 LINK reactor 00:04:28.376 LINK led 00:04:28.376 CXX test/cpp_headers/conf.o 00:04:28.376 LINK idxd_perf 00:04:28.634 CXX test/cpp_headers/config.o 00:04:28.634 CC test/event/reactor_perf/reactor_perf.o 00:04:28.634 CC test/rpc_client/rpc_client_test.o 00:04:28.634 CXX test/cpp_headers/cpuset.o 00:04:28.634 CC test/nvme/aer/aer.o 00:04:28.634 LINK reactor_perf 00:04:28.634 CXX test/cpp_headers/crc16.o 00:04:28.634 LINK rpc_client_test 00:04:28.634 CC test/accel/dif/dif.o 00:04:28.893 CC test/blobfs/mkfs/mkfs.o 00:04:28.893 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:28.893 CC test/lvol/esnap/esnap.o 
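The TEST_HEADER manifest above, together with the CXX test/cpp_headers/*.o compiles that continue below, is SPDK's header self-containedness pass: each public header under include/spdk is compiled as its own translation unit, so a header that silently relies on another file's includes fails here rather than in a consumer's build. A minimal sketch of the idea, assuming a generated one-line .cpp per header (the loop, paths, and compiler flags are illustrative, not the harness's actual invocation):

# Sketch only: compile every public header in isolation so a missing
# transitive #include surfaces as a build error for that header alone.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/${name}.cpp"
    g++ -I include -c "test/cpp_headers/${name}.cpp" -o "test/cpp_headers/${name}.o"
done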
00:04:28.893 CXX test/cpp_headers/crc32.o 00:04:28.893 CC test/event/app_repeat/app_repeat.o 00:04:28.893 LINK aer 00:04:28.893 CC test/nvme/reset/reset.o 00:04:28.893 LINK mkfs 00:04:28.893 LINK memory_ut 00:04:28.893 CXX test/cpp_headers/crc64.o 00:04:29.150 LINK app_repeat 00:04:29.150 LINK hello_fsdev 00:04:29.151 CXX test/cpp_headers/dif.o 00:04:29.151 CC test/nvme/sgl/sgl.o 00:04:29.151 CXX test/cpp_headers/dma.o 00:04:29.151 CC test/app/stub/stub.o 00:04:29.151 LINK reset 00:04:29.151 CC test/event/scheduler/scheduler.o 00:04:29.408 LINK iscsi_fuzz 00:04:29.408 LINK dif 00:04:29.408 CXX test/cpp_headers/endian.o 00:04:29.408 CC test/nvme/e2edp/nvme_dp.o 00:04:29.408 LINK stub 00:04:29.408 LINK sgl 00:04:29.408 CC examples/accel/perf/accel_perf.o 00:04:29.408 CC test/nvme/overhead/overhead.o 00:04:29.408 CXX test/cpp_headers/env_dpdk.o 00:04:29.408 LINK scheduler 00:04:29.408 CXX test/cpp_headers/env.o 00:04:29.408 CXX test/cpp_headers/event.o 00:04:29.666 LINK nvme_dp 00:04:29.666 CXX test/cpp_headers/fd_group.o 00:04:29.666 CC test/nvme/err_injection/err_injection.o 00:04:29.666 CXX test/cpp_headers/fd.o 00:04:29.666 CXX test/cpp_headers/file.o 00:04:29.666 CXX test/cpp_headers/fsdev.o 00:04:29.666 CXX test/cpp_headers/fsdev_module.o 00:04:29.666 CC test/bdev/bdevio/bdevio.o 00:04:29.666 LINK overhead 00:04:29.666 CXX test/cpp_headers/ftl.o 00:04:29.666 LINK err_injection 00:04:29.666 CXX test/cpp_headers/fuse_dispatcher.o 00:04:29.924 CXX test/cpp_headers/gpt_spec.o 00:04:29.924 CXX test/cpp_headers/hexlify.o 00:04:29.924 CC test/nvme/startup/startup.o 00:04:29.924 LINK accel_perf 00:04:29.924 CC test/nvme/reserve/reserve.o 00:04:29.924 CC test/nvme/simple_copy/simple_copy.o 00:04:29.924 LINK bdevio 00:04:29.924 CC test/nvme/boot_partition/boot_partition.o 00:04:29.924 CC test/nvme/connect_stress/connect_stress.o 00:04:29.924 CXX test/cpp_headers/histogram_data.o 00:04:30.183 CC test/nvme/compliance/nvme_compliance.o 00:04:30.183 LINK startup 00:04:30.183 CXX test/cpp_headers/idxd.o 00:04:30.183 LINK boot_partition 00:04:30.183 LINK reserve 00:04:30.183 LINK connect_stress 00:04:30.183 LINK simple_copy 00:04:30.183 CXX test/cpp_headers/idxd_spec.o 00:04:30.183 CXX test/cpp_headers/init.o 00:04:30.183 CC examples/blob/hello_world/hello_blob.o 00:04:30.441 CXX test/cpp_headers/ioat.o 00:04:30.441 CXX test/cpp_headers/ioat_spec.o 00:04:30.441 CC examples/nvme/hello_world/hello_world.o 00:04:30.441 CXX test/cpp_headers/iscsi_spec.o 00:04:30.441 LINK nvme_compliance 00:04:30.441 CXX test/cpp_headers/json.o 00:04:30.441 CXX test/cpp_headers/jsonrpc.o 00:04:30.441 CXX test/cpp_headers/keyring.o 00:04:30.441 CXX test/cpp_headers/keyring_module.o 00:04:30.441 CXX test/cpp_headers/likely.o 00:04:30.441 CXX test/cpp_headers/log.o 00:04:30.441 LINK hello_blob 00:04:30.441 CXX test/cpp_headers/lvol.o 00:04:30.441 CXX test/cpp_headers/md5.o 00:04:30.441 CXX test/cpp_headers/memory.o 00:04:30.700 CC test/nvme/fused_ordering/fused_ordering.o 00:04:30.700 LINK hello_world 00:04:30.700 CXX test/cpp_headers/mmio.o 00:04:30.700 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.700 CXX test/cpp_headers/nbd.o 00:04:30.700 CXX test/cpp_headers/net.o 00:04:30.700 CC examples/blob/cli/blobcli.o 00:04:30.700 CC examples/nvme/reconnect/reconnect.o 00:04:30.700 LINK fused_ordering 00:04:30.700 CXX test/cpp_headers/notify.o 00:04:30.700 CC test/nvme/fdp/fdp.o 00:04:30.700 LINK doorbell_aers 00:04:30.700 CXX test/cpp_headers/nvme.o 00:04:30.700 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:30.700 CXX 
test/cpp_headers/nvme_intel.o 00:04:30.958 CXX test/cpp_headers/nvme_ocssd.o 00:04:30.958 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:30.958 CXX test/cpp_headers/nvme_spec.o 00:04:30.958 CXX test/cpp_headers/nvme_zns.o 00:04:30.958 CC test/nvme/cuse/cuse.o 00:04:30.958 CXX test/cpp_headers/nvmf_cmd.o 00:04:30.958 LINK fdp 00:04:31.216 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.216 LINK reconnect 00:04:31.216 LINK blobcli 00:04:31.216 CXX test/cpp_headers/nvmf.o 00:04:31.216 CXX test/cpp_headers/nvmf_spec.o 00:04:31.216 CXX test/cpp_headers/nvmf_transport.o 00:04:31.216 LINK nvme_manage 00:04:31.216 CXX test/cpp_headers/opal.o 00:04:31.216 CXX test/cpp_headers/opal_spec.o 00:04:31.216 CC examples/nvme/arbitration/arbitration.o 00:04:31.216 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.216 CXX test/cpp_headers/pci_ids.o 00:04:31.474 CC examples/nvme/hotplug/hotplug.o 00:04:31.474 CC examples/bdev/bdevperf/bdevperf.o 00:04:31.474 CXX test/cpp_headers/pipe.o 00:04:31.474 CXX test/cpp_headers/queue.o 00:04:31.474 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:31.474 CC examples/nvme/abort/abort.o 00:04:31.474 CXX test/cpp_headers/reduce.o 00:04:31.474 LINK hotplug 00:04:31.474 CXX test/cpp_headers/rpc.o 00:04:31.474 LINK hello_bdev 00:04:31.732 LINK arbitration 00:04:31.732 CXX test/cpp_headers/scheduler.o 00:04:31.732 LINK cmb_copy 00:04:31.732 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:31.732 CXX test/cpp_headers/scsi.o 00:04:31.732 CXX test/cpp_headers/scsi_spec.o 00:04:31.732 CXX test/cpp_headers/sock.o 00:04:31.732 LINK abort 00:04:31.732 CXX test/cpp_headers/stdinc.o 00:04:31.732 CXX test/cpp_headers/string.o 00:04:31.732 CXX test/cpp_headers/thread.o 00:04:31.989 LINK pmr_persistence 00:04:31.989 CXX test/cpp_headers/trace.o 00:04:31.989 CXX test/cpp_headers/trace_parser.o 00:04:31.989 CXX test/cpp_headers/tree.o 00:04:31.989 CXX test/cpp_headers/ublk.o 00:04:31.989 CXX test/cpp_headers/util.o 00:04:31.989 CXX test/cpp_headers/uuid.o 00:04:31.989 CXX test/cpp_headers/version.o 00:04:31.989 LINK cuse 00:04:31.989 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.989 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.989 CXX test/cpp_headers/vhost.o 00:04:31.989 CXX test/cpp_headers/vmd.o 00:04:31.989 CXX test/cpp_headers/xor.o 00:04:31.989 CXX test/cpp_headers/zipf.o 00:04:32.247 LINK bdevperf 00:04:32.815 CC examples/nvmf/nvmf/nvmf.o 00:04:33.073 LINK nvmf 00:04:34.446 LINK esnap 00:04:34.705 ************************************ 00:04:34.705 END TEST make 00:04:34.705 ************************************ 00:04:34.705 00:04:34.705 real 1m13.264s 00:04:34.705 user 6m34.637s 00:04:34.705 sys 1m15.504s 00:04:34.705 14:37:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:34.705 14:37:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:34.705 14:37:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:34.705 14:37:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:34.705 14:37:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:34.705 14:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.705 14:37:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:34.705 14:37:12 -- pm/common@44 -- $ pid=5065 00:04:34.705 14:37:12 -- pm/common@50 -- $ kill -TERM 5065 00:04:34.705 14:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.705 14:37:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 
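The pm/common trace here (continuing just below) is autotest's resource-monitor teardown: each monitor wrote its pid into the output/power directory when it started, and stop_monitor_resources walks MONITOR_RESOURCES, sending TERM to whichever pid files exist (pids 5065 and 5066 in this run). A condensed sketch of that pattern, with the array and signal taken from the trace and the pidfile directory abbreviated as $power_dir (an assumption for this sketch):

# Condensed form of the teardown visible in the surrounding trace.
for monitor in "${MONITOR_RESOURCES[@]}"; do            # e.g. collect-cpu-load, collect-vmstat
    pidfile="$power_dir/${monitor}.pid"                 # written when the monitor started
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
done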
00:04:34.705 14:37:12 -- pm/common@44 -- $ pid=5066 00:04:34.705 14:37:12 -- pm/common@50 -- $ kill -TERM 5066 00:04:34.705 14:37:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:34.705 14:37:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:34.705 14:37:12 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.705 14:37:12 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.705 14:37:12 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.705 14:37:12 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.705 14:37:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.705 14:37:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.705 14:37:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.705 14:37:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.705 14:37:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.705 14:37:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.705 14:37:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.705 14:37:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.705 14:37:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.705 14:37:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.705 14:37:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.705 14:37:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:34.705 14:37:12 -- scripts/common.sh@345 -- # : 1 00:04:34.705 14:37:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.705 14:37:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.705 14:37:12 -- scripts/common.sh@365 -- # decimal 1 00:04:34.705 14:37:12 -- scripts/common.sh@353 -- # local d=1 00:04:34.705 14:37:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.705 14:37:12 -- scripts/common.sh@355 -- # echo 1 00:04:34.705 14:37:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.705 14:37:12 -- scripts/common.sh@366 -- # decimal 2 00:04:34.705 14:37:12 -- scripts/common.sh@353 -- # local d=2 00:04:34.705 14:37:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.705 14:37:12 -- scripts/common.sh@355 -- # echo 2 00:04:34.705 14:37:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.705 14:37:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.705 14:37:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.705 14:37:12 -- scripts/common.sh@368 -- # return 0 00:04:34.705 14:37:12 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.705 14:37:12 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.705 --rc genhtml_branch_coverage=1 00:04:34.705 --rc genhtml_function_coverage=1 00:04:34.705 --rc genhtml_legend=1 00:04:34.705 --rc geninfo_all_blocks=1 00:04:34.705 --rc geninfo_unexecuted_blocks=1 00:04:34.705 00:04:34.705 ' 00:04:34.705 14:37:12 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.705 --rc genhtml_branch_coverage=1 00:04:34.705 --rc genhtml_function_coverage=1 00:04:34.705 --rc genhtml_legend=1 00:04:34.705 --rc geninfo_all_blocks=1 00:04:34.705 --rc geninfo_unexecuted_blocks=1 00:04:34.705 00:04:34.705 ' 00:04:34.705 14:37:12 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.705 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:34.705 --rc genhtml_branch_coverage=1 00:04:34.705 --rc genhtml_function_coverage=1 00:04:34.705 --rc genhtml_legend=1 00:04:34.705 --rc geninfo_all_blocks=1 00:04:34.705 --rc geninfo_unexecuted_blocks=1 00:04:34.705 00:04:34.705 ' 00:04:34.705 14:37:12 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.705 --rc genhtml_branch_coverage=1 00:04:34.705 --rc genhtml_function_coverage=1 00:04:34.705 --rc genhtml_legend=1 00:04:34.705 --rc geninfo_all_blocks=1 00:04:34.705 --rc geninfo_unexecuted_blocks=1 00:04:34.705 00:04:34.705 ' 00:04:34.705 14:37:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.705 14:37:12 -- nvmf/common.sh@7 -- # uname -s 00:04:34.705 14:37:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.705 14:37:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.705 14:37:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.705 14:37:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.705 14:37:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.705 14:37:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.705 14:37:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.705 14:37:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.705 14:37:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.705 14:37:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.705 14:37:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:889ec288-9bdb-4025-81ca-1ba3f773afd1 00:04:34.705 14:37:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=889ec288-9bdb-4025-81ca-1ba3f773afd1 00:04:34.705 14:37:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.705 14:37:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.705 14:37:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.705 14:37:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.705 14:37:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.705 14:37:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.705 14:37:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.705 14:37:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.705 14:37:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.705 14:37:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.705 14:37:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.705 14:37:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.705 14:37:12 -- paths/export.sh@5 -- # export PATH 00:04:34.705 14:37:12 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.705 14:37:12 -- nvmf/common.sh@51 -- # : 0 00:04:34.705 14:37:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.705 14:37:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.705 14:37:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.705 14:37:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.705 14:37:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.705 14:37:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.706 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.706 14:37:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.706 14:37:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.706 14:37:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.706 14:37:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:34.706 14:37:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:34.963 14:37:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:34.963 14:37:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:34.963 14:37:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.963 14:37:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:34.963 14:37:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:34.963 14:37:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:34.963 14:37:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:34.963 14:37:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:34.963 14:37:12 -- spdk/autotest.sh@48 -- # udevadm_pid=55518 00:04:34.963 14:37:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:34.963 14:37:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:34.963 14:37:12 -- pm/common@17 -- # local monitor 00:04:34.963 14:37:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.963 14:37:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:34.963 14:37:12 -- pm/common@25 -- # sleep 1 00:04:34.963 14:37:12 -- pm/common@21 -- # date +%s 00:04:34.963 14:37:12 -- pm/common@21 -- # date +%s 00:04:34.963 14:37:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733755032 00:04:34.963 14:37:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733755032 00:04:34.963 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733755032_collect-cpu-load.pm.log 00:04:34.963 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733755032_collect-vmstat.pm.log 00:04:35.897 14:37:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:35.897 14:37:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:35.897 14:37:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.897 14:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.897 14:37:13 -- spdk/autotest.sh@59 
-- # create_test_list 00:04:35.897 14:37:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:35.897 14:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.897 14:37:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:35.897 14:37:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:35.897 14:37:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:35.897 14:37:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.897 14:37:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:35.897 14:37:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:35.897 14:37:13 -- common/autotest_common.sh@1457 -- # uname 00:04:35.897 14:37:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:35.897 14:37:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:35.897 14:37:13 -- common/autotest_common.sh@1477 -- # uname 00:04:35.897 14:37:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:35.897 14:37:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:35.897 14:37:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:35.897 lcov: LCOV version 1.15 00:04:35.897 14:37:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.808 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:50.808 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:05.700 14:37:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:05.700 14:37:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.700 14:37:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.700 14:37:43 -- spdk/autotest.sh@78 -- # rm -f 00:05:05.700 14:37:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.266 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:06.266 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:06.266 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:06.266 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:06.266 14:37:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:06.266 14:37:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:06.266 14:37:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:06.266 14:37:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:06.266 14:37:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:06.266 14:37:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:06.266 14:37:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.266 14:37:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:06.266 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.266 14:37:44 -- 
common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:06.266 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:05:06.267 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:05:06.267 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:06.267 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:06.267 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:06.267 14:37:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.267 14:37:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:06.267 14:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:06.267 14:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.267 14:37:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:06.267 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.267 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.267 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:06.267 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:06.267 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:06.267 No valid GPT data, bailing 00:05:06.267 14:37:44 
-- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.267 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.267 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:06.267 1+0 records in 00:05:06.267 1+0 records out 00:05:06.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457297 s, 229 MB/s 00:05:06.267 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.267 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.267 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:06.267 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:06.267 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:06.267 No valid GPT data, bailing 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.267 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.267 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:06.267 1+0 records in 00:05:06.267 1+0 records out 00:05:06.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502159 s, 209 MB/s 00:05:06.267 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.267 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.267 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:06.267 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:06.267 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:06.267 No valid GPT data, bailing 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.267 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.267 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:06.267 1+0 records in 00:05:06.267 1+0 records out 00:05:06.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402975 s, 260 MB/s 00:05:06.267 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.267 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.267 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:06.267 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:06.267 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:06.267 No valid GPT data, bailing 00:05:06.267 14:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.525 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.525 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.525 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:06.525 1+0 records in 00:05:06.525 1+0 records out 00:05:06.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049209 s, 213 MB/s 00:05:06.526 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.526 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.526 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:06.526 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:06.526 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:06.526 No valid GPT data, bailing 00:05:06.526 14:37:44 
-- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:06.526 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.526 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.526 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:06.526 1+0 records in 00:05:06.526 1+0 records out 00:05:06.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384602 s, 273 MB/s 00:05:06.526 14:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.526 14:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.526 14:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:06.526 14:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:06.526 14:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:06.526 No valid GPT data, bailing 00:05:06.526 14:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:06.526 14:37:44 -- scripts/common.sh@394 -- # pt= 00:05:06.526 14:37:44 -- scripts/common.sh@395 -- # return 1 00:05:06.526 14:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:06.526 1+0 records in 00:05:06.526 1+0 records out 00:05:06.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204179 s, 51.4 MB/s 00:05:06.526 14:37:44 -- spdk/autotest.sh@105 -- # sync 00:05:06.526 14:37:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:06.526 14:37:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:06.526 14:37:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:08.491 14:37:46 -- spdk/autotest.sh@111 -- # uname -s 00:05:08.491 14:37:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:08.491 14:37:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:08.491 14:37:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:08.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.013 Hugepages 00:05:09.013 node hugesize free / total 00:05:09.013 node0 1048576kB 0 / 0 00:05:09.013 node0 2048kB 0 / 0 00:05:09.013 00:05:09.013 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.275 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:09.275 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:09.275 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:09.275 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:09.533 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:09.533 14:37:47 -- spdk/autotest.sh@117 -- # uname -s 00:05:09.533 14:37:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:09.534 14:37:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:09.534 14:37:47 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.358 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.358 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.358 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.358 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.615 14:37:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.551 14:37:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.551 14:37:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.551 14:37:49 -- 
common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.551 14:37:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:11.551 14:37:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.551 14:37:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.551 14:37:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.551 14:37:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.551 14:37:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.551 14:37:49 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:11.551 14:37:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:11.551 14:37:49 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.067 Waiting for block devices as requested 00:05:12.067 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.067 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.067 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.325 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.658 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:17.658 14:37:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.658 14:37:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1543 -- # continue 00:05:17.658 14:37:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # 
get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.658 14:37:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1543 -- # continue 00:05:17.658 14:37:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.658 14:37:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.658 14:37:55 -- 
common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1543 -- # continue 00:05:17.658 14:37:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.658 14:37:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.658 14:37:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.658 14:37:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.658 14:37:55 -- common/autotest_common.sh@1543 -- # continue 00:05:17.658 14:37:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:17.658 14:37:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.658 14:37:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.658 14:37:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:17.658 14:37:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.658 14:37:55 -- common/autotest_common.sh@10 -- # set +x 00:05:17.658 14:37:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.484 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.484 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.484 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.484 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.742 14:37:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:18.742 14:37:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.742 14:37:56 -- common/autotest_common.sh@10 -- # set +x 00:05:18.742 14:37:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:18.742 14:37:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:18.742 14:37:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.742 14:37:56 -- common/autotest_common.sh@1563 -- # 
bdfs=() 00:05:18.742 14:37:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:18.742 14:37:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:18.742 14:37:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:18.742 14:37:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:18.742 14:37:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:18.742 14:37:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:18.742 14:37:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.742 14:37:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.742 14:37:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:18.742 14:37:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:18.742 14:37:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:18.742 14:37:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:18.742 14:37:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.742 14:37:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:18.742 14:37:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.742 14:37:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:18.742 14:37:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.742 14:37:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:18.742 14:37:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:18.742 14:37:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.742 14:37:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:18.742 14:37:56 -- common/autotest_common.sh@1572 -- # return 0 00:05:18.742 14:37:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:18.742 14:37:56 -- common/autotest_common.sh@1580 -- # return 0 00:05:18.742 14:37:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:18.742 14:37:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:18.742 14:37:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.742 14:37:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.742 14:37:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:18.742 14:37:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.742 14:37:56 -- common/autotest_common.sh@10 -- # set +x 00:05:18.742 14:37:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:18.742 14:37:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.742 14:37:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.742 14:37:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.742 14:37:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.742 ************************************ 00:05:18.742 START TEST env 00:05:18.742 ************************************ 00:05:18.742 14:37:56 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.742 * Looking for test storage... 00:05:18.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:18.742 14:37:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.001 14:37:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.001 14:37:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.001 14:37:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.001 14:37:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.001 14:37:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.001 14:37:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.001 14:37:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.001 14:37:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.001 14:37:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.001 14:37:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.001 14:37:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.001 14:37:56 env -- scripts/common.sh@344 -- # case "$op" in 00:05:19.001 14:37:56 env -- scripts/common.sh@345 -- # : 1 00:05:19.001 14:37:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.001 14:37:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.001 14:37:56 env -- scripts/common.sh@365 -- # decimal 1 00:05:19.001 14:37:56 env -- scripts/common.sh@353 -- # local d=1 00:05:19.001 14:37:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.001 14:37:56 env -- scripts/common.sh@355 -- # echo 1 00:05:19.001 14:37:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.001 14:37:56 env -- scripts/common.sh@366 -- # decimal 2 00:05:19.001 14:37:56 env -- scripts/common.sh@353 -- # local d=2 00:05:19.001 14:37:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.001 14:37:56 env -- scripts/common.sh@355 -- # echo 2 00:05:19.001 14:37:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.001 14:37:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.001 14:37:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.001 14:37:56 env -- scripts/common.sh@368 -- # return 0 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.001 --rc genhtml_branch_coverage=1 00:05:19.001 --rc genhtml_function_coverage=1 00:05:19.001 --rc genhtml_legend=1 00:05:19.001 --rc geninfo_all_blocks=1 00:05:19.001 --rc geninfo_unexecuted_blocks=1 00:05:19.001 00:05:19.001 ' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.001 --rc genhtml_branch_coverage=1 00:05:19.001 --rc genhtml_function_coverage=1 00:05:19.001 --rc genhtml_legend=1 00:05:19.001 --rc geninfo_all_blocks=1 00:05:19.001 --rc 
geninfo_unexecuted_blocks=1 00:05:19.001 00:05:19.001 ' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.001 --rc genhtml_branch_coverage=1 00:05:19.001 --rc genhtml_function_coverage=1 00:05:19.001 --rc genhtml_legend=1 00:05:19.001 --rc geninfo_all_blocks=1 00:05:19.001 --rc geninfo_unexecuted_blocks=1 00:05:19.001 00:05:19.001 ' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.001 --rc genhtml_branch_coverage=1 00:05:19.001 --rc genhtml_function_coverage=1 00:05:19.001 --rc genhtml_legend=1 00:05:19.001 --rc geninfo_all_blocks=1 00:05:19.001 --rc geninfo_unexecuted_blocks=1 00:05:19.001 00:05:19.001 ' 00:05:19.001 14:37:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.001 14:37:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.001 14:37:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.001 ************************************ 00:05:19.001 START TEST env_memory 00:05:19.001 ************************************ 00:05:19.001 14:37:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:19.001 00:05:19.001 00:05:19.001 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.001 http://cunit.sourceforge.net/ 00:05:19.001 00:05:19.001 00:05:19.001 Suite: memory 00:05:19.001 Test: alloc and free memory map ...[2024-12-09 14:37:57.002127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.001 passed 00:05:19.001 Test: mem map translation ...[2024-12-09 14:37:57.043947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.001 [2024-12-09 14:37:57.044097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.001 [2024-12-09 14:37:57.044211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.001 [2024-12-09 14:37:57.044231] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.001 passed 00:05:19.002 Test: mem map registration ...[2024-12-09 14:37:57.112593] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:19.002 [2024-12-09 14:37:57.112707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:19.259 passed 00:05:19.259 Test: mem map adjacent registrations ...passed 00:05:19.259 00:05:19.259 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.259 suites 1 1 n/a 0 0 00:05:19.259 tests 4 4 4 0 0 00:05:19.259 asserts 152 152 152 0 n/a 00:05:19.259 00:05:19.259 Elapsed time = 0.238 seconds 00:05:19.259 00:05:19.259 real 0m0.275s 00:05:19.259 user 0m0.250s 00:05:19.259 sys 0m0.017s 00:05:19.259 14:37:57 env.env_memory -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.259 14:37:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.259 ************************************ 00:05:19.259 END TEST env_memory 00:05:19.259 ************************************ 00:05:19.259 14:37:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:19.259 14:37:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.259 14:37:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.259 14:37:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.259 ************************************ 00:05:19.259 START TEST env_vtophys 00:05:19.259 ************************************ 00:05:19.259 14:37:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:19.259 EAL: lib.eal log level changed from notice to debug 00:05:19.259 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 1 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 2 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 3 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 4 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 5 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 6 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 7 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 8 as core 0 on socket 0 00:05:19.259 EAL: Detected lcore 9 as core 0 on socket 0 00:05:19.259 EAL: Maximum logical cores by configuration: 128 00:05:19.259 EAL: Detected CPU lcores: 10 00:05:19.259 EAL: Detected NUMA nodes: 1 00:05:19.259 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:19.259 EAL: Detected shared linkage of DPDK 00:05:19.259 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.259 EAL: Selected IOVA mode 'PA' 00:05:19.259 EAL: Probing VFIO support... 00:05:19.259 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.259 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:19.259 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.259 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.259 EAL: Setting up physically contiguous memory... 
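The env_memory suite above rejects translations whose address or length is not 2 MB aligned. A minimal sketch of those checks, assuming the spdk_mem_map API exported by spdk/env.h (the values mirror the errors in the log; the function name is hypothetical):

    #include <assert.h>
    #include "spdk/env.h"

    /* Sketch: mem-map translations must be 2 MB aligned, as the
     * spdk_mem_map_set_translation errors above show. */
    static void
    mem_map_alignment_sketch(void)
    {
            struct spdk_mem_map *map;

            /* Default translation 0, no notify ops registered. */
            map = spdk_mem_map_alloc(0, NULL, NULL);
            assert(map != NULL);

            /* len=1234 is unaligned -> rejected (first error in the log). */
            assert(spdk_mem_map_set_translation(map, 2097152, 1234, 0) != 0);
            /* vaddr=1234 is unaligned -> rejected (second error). */
            assert(spdk_mem_map_set_translation(map, 1234, 2097152, 0) != 0);

            spdk_mem_map_free(&map);
    }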
00:05:19.259 EAL: Setting maximum number of open files to 524288 00:05:19.259 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.259 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.259 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.259 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.259 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.259 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.259 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.259 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.259 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.259 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.259 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.260 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.260 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.260 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.260 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.260 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.260 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.260 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.260 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.260 EAL: Hugepages will be freed exactly as allocated. 00:05:19.260 EAL: No shared files mode enabled, IPC is disabled 00:05:19.260 EAL: No shared files mode enabled, IPC is disabled 00:05:19.517 EAL: TSC frequency is ~2600000 KHz 00:05:19.517 EAL: Main lcore 0 is ready (tid=7f65dc60fa40;cpuset=[0]) 00:05:19.517 EAL: Trying to obtain current memory policy. 00:05:19.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.517 EAL: Restoring previous memory policy: 0 00:05:19.517 EAL: request: mp_malloc_sync 00:05:19.517 EAL: No shared files mode enabled, IPC is disabled 00:05:19.517 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.517 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.517 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.517 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.517 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:19.517 00:05:19.517 00:05:19.517 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.517 http://cunit.sourceforge.net/ 00:05:19.517 00:05:19.517 00:05:19.517 Suite: components_suite 00:05:19.778 Test: vtophys_malloc_test ...passed 00:05:19.778 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
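With the memseg lists reserved, vtophys_malloc_test exercises virtual-to-physical translation. A minimal sketch of the core check, assuming spdk_dma_malloc and spdk_vtophys from spdk/env.h:

    #include <assert.h>
    #include "spdk/env.h"

    /* Sketch: DMA-safe memory must translate to a physical address. */
    static void
    vtophys_sketch(void)
    {
            void *buf = spdk_dma_malloc(4096, 0x1000, NULL);

            assert(buf != NULL);
            /* NULL size: translate just the start of the buffer. */
            assert(spdk_vtophys(buf, NULL) != SPDK_VTOPHYS_ERROR);
            spdk_dma_free(buf);
    }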
00:05:19.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.778 EAL: Restoring previous memory policy: 4 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.778 EAL: Trying to obtain current memory policy. 00:05:19.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.778 EAL: Restoring previous memory policy: 4 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.778 EAL: Trying to obtain current memory policy. 00:05:19.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.778 EAL: Restoring previous memory policy: 4 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.778 EAL: Trying to obtain current memory policy. 00:05:19.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.778 EAL: Restoring previous memory policy: 4 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.778 EAL: Trying to obtain current memory policy. 00:05:19.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.778 EAL: Restoring previous memory policy: 4 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.778 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.778 EAL: request: mp_malloc_sync 00:05:19.778 EAL: No shared files mode enabled, IPC is disabled 00:05:19.778 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.038 EAL: Trying to obtain current memory policy. 
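Each "Calling mem event callback" line is DPDK invoking a callback registered against its heap; SPDK registers one named "spdk" with a NULL argument, which DPDK prints as 'spdk:(nil)'. A sketch of such a registration, assuming DPDK's rte_memory.h API:

    #include <stdio.h>
    #include <rte_memory.h>

    /* Sketch: invoked by DPDK on every heap expansion or shrink. */
    static void
    mem_event_cb(enum rte_mem_event event_type, const void *addr,
                 size_t len, void *arg)
    {
            printf("%s addr=%p len=%zu\n",
                   event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free",
                   addr, len);
    }

    /* After rte_eal_init():
     *   rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
     */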
00:05:20.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.038 EAL: Restoring previous memory policy: 4 00:05:20.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.038 EAL: request: mp_malloc_sync 00:05:20.038 EAL: No shared files mode enabled, IPC is disabled 00:05:20.038 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.038 EAL: request: mp_malloc_sync 00:05:20.038 EAL: No shared files mode enabled, IPC is disabled 00:05:20.038 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.038 EAL: Trying to obtain current memory policy. 00:05:20.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.038 EAL: Restoring previous memory policy: 4 00:05:20.038 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.038 EAL: request: mp_malloc_sync 00:05:20.038 EAL: No shared files mode enabled, IPC is disabled 00:05:20.038 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.299 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.299 EAL: request: mp_malloc_sync 00:05:20.299 EAL: No shared files mode enabled, IPC is disabled 00:05:20.299 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.299 EAL: Trying to obtain current memory policy. 00:05:20.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.560 EAL: Restoring previous memory policy: 4 00:05:20.560 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.560 EAL: request: mp_malloc_sync 00:05:20.560 EAL: No shared files mode enabled, IPC is disabled 00:05:20.560 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.821 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.821 EAL: request: mp_malloc_sync 00:05:20.821 EAL: No shared files mode enabled, IPC is disabled 00:05:20.821 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.081 EAL: Trying to obtain current memory policy. 00:05:21.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.081 EAL: Restoring previous memory policy: 4 00:05:21.081 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.081 EAL: request: mp_malloc_sync 00:05:21.081 EAL: No shared files mode enabled, IPC is disabled 00:05:21.081 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.652 EAL: request: mp_malloc_sync 00:05:21.652 EAL: No shared files mode enabled, IPC is disabled 00:05:21.652 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.223 EAL: Trying to obtain current memory policy. 
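The expand/shrink pairs above (4 MB, 6 MB, 10 MB, ... 514 MB) track a loop of growing allocations that are freed immediately, so the heap is returned after each step. A rough sketch of the pattern (the doubling sizes are an assumption, not the test's exact sequence):

    #include "spdk/env.h"

    /* Sketch: grow-and-free loop driving the heap expand/shrink pairs. */
    static void
    malloc_loop_sketch(void)
    {
            size_t size;

            for (size = 2 * 1024 * 1024; size <= 1024UL * 1024 * 1024; size *= 2) {
                    void *buf = spdk_dma_zmalloc(size, 0x200000, NULL);

                    if (buf == NULL) {
                            break;  /* allocation failure ends the loop */
                    }
                    spdk_dma_free(buf);  /* heap shrinks again on free */
            }
    }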
00:05:22.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.480 EAL: Restoring previous memory policy: 4 00:05:22.480 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.480 EAL: request: mp_malloc_sync 00:05:22.480 EAL: No shared files mode enabled, IPC is disabled 00:05:22.480 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.862 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.862 EAL: request: mp_malloc_sync 00:05:23.862 EAL: No shared files mode enabled, IPC is disabled 00:05:23.862 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.800 passed 00:05:24.801 00:05:24.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.801 suites 1 1 n/a 0 0 00:05:24.801 tests 2 2 2 0 0 00:05:24.801 asserts 5964 5964 5964 0 n/a 00:05:24.801 00:05:24.801 Elapsed time = 5.082 seconds 00:05:24.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.801 EAL: request: mp_malloc_sync 00:05:24.801 EAL: No shared files mode enabled, IPC is disabled 00:05:24.801 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.801 EAL: No shared files mode enabled, IPC is disabled 00:05:24.801 EAL: No shared files mode enabled, IPC is disabled 00:05:24.801 EAL: No shared files mode enabled, IPC is disabled 00:05:24.801 00:05:24.801 real 0m5.354s 00:05:24.801 user 0m4.555s 00:05:24.801 sys 0m0.645s 00:05:24.801 14:38:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.801 ************************************ 00:05:24.801 END TEST env_vtophys 00:05:24.801 ************************************ 00:05:24.801 14:38:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 14:38:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:24.801 14:38:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.801 14:38:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.801 14:38:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 ************************************ 00:05:24.801 START TEST env_pci 00:05:24.801 ************************************ 00:05:24.801 14:38:02 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:24.801 00:05:24.801 00:05:24.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.801 http://cunit.sourceforge.net/ 00:05:24.801 00:05:24.801 00:05:24.801 Suite: pci 00:05:24.801 Test: pci_hook ...[2024-12-09 14:38:02.697601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58275 has claimed it 00:05:24.801 passed 00:05:24.801 00:05:24.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.801 suites 1 1 n/a 0 0 00:05:24.801 tests 1 1 1 0 0 00:05:24.801 asserts 25 25 25 0 n/a 00:05:24.801 00:05:24.801 Elapsed time = 0.005 seconds 00:05:24.801 EAL: Cannot find device (10000:00:01.0) 00:05:24.801 EAL: Failed to attach device on primary process 00:05:24.801 00:05:24.801 real 0m0.059s 00:05:24.801 user 0m0.027s 00:05:24.801 sys 0m0.032s 00:05:24.801 14:38:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.801 ************************************ 00:05:24.801 END TEST env_pci 00:05:24.801 ************************************ 00:05:24.801 14:38:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 14:38:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.801 14:38:02 env -- env/env.sh@15 -- # uname 00:05:24.801 14:38:02 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.801 14:38:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.801 14:38:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.801 14:38:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:24.801 14:38:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.801 14:38:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 ************************************ 00:05:24.801 START TEST env_dpdk_post_init 00:05:24.801 ************************************ 00:05:24.801 14:38:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.801 EAL: Detected CPU lcores: 10 00:05:24.801 EAL: Detected NUMA nodes: 1 00:05:24.801 EAL: Detected shared linkage of DPDK 00:05:24.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.801 EAL: Selected IOVA mode 'PA' 00:05:25.059 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.059 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:25.059 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:25.059 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:25.059 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:25.059 Starting DPDK initialization... 00:05:25.059 Starting SPDK post initialization... 00:05:25.059 SPDK NVMe probe 00:05:25.059 Attaching to 0000:00:10.0 00:05:25.059 Attaching to 0000:00:11.0 00:05:25.059 Attaching to 0000:00:12.0 00:05:25.059 Attaching to 0000:00:13.0 00:05:25.059 Attached to 0000:00:10.0 00:05:25.059 Attached to 0000:00:11.0 00:05:25.059 Attached to 0000:00:13.0 00:05:25.059 Attached to 0000:00:12.0 00:05:25.059 Cleaning up... 
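env_dpdk_post_init was launched with "-c 0x1 --base-virtaddr=0x200000000000", and the probe attached the four emulated NVMe controllers at 00:10.0 through 00:13.0. A sketch of setting those options through the env API, assuming struct spdk_env_opts from spdk/env.h (the app name is made up):

    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "post_init_sketch";            /* hypothetical name */
            opts.core_mask = "0x1";                    /* -c 0x1 */
            opts.base_virtaddr = 0x200000000000ULL;    /* --base-virtaddr */
            if (spdk_env_init(&opts) < 0) {
                    fprintf(stderr, "spdk_env_init failed\n");
                    return 1;
            }
            /* spdk_nvme_probe() would then attach the controllers above. */
            return 0;
    }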
00:05:25.059 ************************************ 00:05:25.059 END TEST env_dpdk_post_init 00:05:25.059 ************************************ 00:05:25.059 00:05:25.059 real 0m0.244s 00:05:25.059 user 0m0.079s 00:05:25.059 sys 0m0.067s 00:05:25.059 14:38:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.059 14:38:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.059 14:38:03 env -- env/env.sh@26 -- # uname 00:05:25.059 14:38:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.059 14:38:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.059 14:38:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.059 14:38:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.059 14:38:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.059 ************************************ 00:05:25.059 START TEST env_mem_callbacks 00:05:25.059 ************************************ 00:05:25.059 14:38:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.059 EAL: Detected CPU lcores: 10 00:05:25.059 EAL: Detected NUMA nodes: 1 00:05:25.059 EAL: Detected shared linkage of DPDK 00:05:25.059 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.059 EAL: Selected IOVA mode 'PA' 00:05:25.317 00:05:25.317 00:05:25.317 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.317 http://cunit.sourceforge.net/ 00:05:25.317 00:05:25.317 00:05:25.317 Suite: memory 00:05:25.317 Test: test ... 00:05:25.317 register 0x200000200000 2097152 00:05:25.317 malloc 3145728 00:05:25.317 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.317 register 0x200000400000 4194304 00:05:25.317 buf 0x2000004fffc0 len 3145728 PASSED 00:05:25.317 malloc 64 00:05:25.317 buf 0x2000004ffec0 len 64 PASSED 00:05:25.317 malloc 4194304 00:05:25.317 register 0x200000800000 6291456 00:05:25.317 buf 0x2000009fffc0 len 4194304 PASSED 00:05:25.317 free 0x2000004fffc0 3145728 00:05:25.317 free 0x2000004ffec0 64 00:05:25.317 unregister 0x200000400000 4194304 PASSED 00:05:25.317 free 0x2000009fffc0 4194304 00:05:25.317 unregister 0x200000800000 6291456 PASSED 00:05:25.317 malloc 8388608 00:05:25.317 register 0x200000400000 10485760 00:05:25.317 buf 0x2000005fffc0 len 8388608 PASSED 00:05:25.317 free 0x2000005fffc0 8388608 00:05:25.317 unregister 0x200000400000 10485760 PASSED 00:05:25.317 passed 00:05:25.317 00:05:25.317 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.317 suites 1 1 n/a 0 0 00:05:25.317 tests 1 1 1 0 0 00:05:25.317 asserts 15 15 15 0 n/a 00:05:25.317 00:05:25.317 Elapsed time = 0.047 seconds 00:05:25.317 00:05:25.317 real 0m0.213s 00:05:25.317 user 0m0.063s 00:05:25.317 sys 0m0.047s 00:05:25.317 14:38:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.317 14:38:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.317 ************************************ 00:05:25.317 END TEST env_mem_callbacks 00:05:25.317 ************************************ 00:05:25.317 ************************************ 00:05:25.317 END TEST env 00:05:25.317 ************************************ 00:05:25.317 00:05:25.317 real 0m6.532s 00:05:25.317 user 0m5.144s 00:05:25.317 sys 0m1.010s 00:05:25.317 14:38:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.317 14:38:03 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.317 14:38:03 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:25.317 14:38:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.317 14:38:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.317 14:38:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.317 ************************************ 00:05:25.317 START TEST rpc 00:05:25.317 ************************************ 00:05:25.317 14:38:03 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:25.317 * Looking for test storage... 00:05:25.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.317 14:38:03 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.317 14:38:03 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.317 14:38:03 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.575 14:38:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.575 14:38:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.575 14:38:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.575 14:38:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.575 14:38:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.575 14:38:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.575 14:38:03 rpc -- scripts/common.sh@345 -- # : 1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.575 14:38:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.575 14:38:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.575 14:38:03 rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.575 14:38:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.575 14:38:03 rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
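The "Waiting for process to start up..." line comes from waitforlisten, which polls until spdk_tgt accepts connections on /var/tmp/spdk.sock. A rough single-probe equivalent in C (plain POSIX sockets; the helper name is made up):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Sketch: one readiness probe against the RPC Unix socket. */
    static int
    rpc_sock_ready(const char *path)
    {
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            int rc;

            if (fd < 0) {
                    return 0;
            }
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
            close(fd);
            return rc == 0;
    }

    /* e.g. while (!rpc_sock_ready("/var/tmp/spdk.sock")) { usleep(100000); } */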
00:05:25.575 14:38:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.575 14:38:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.575 14:38:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.575 14:38:03 rpc -- scripts/common.sh@368 -- # return 0 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.575 --rc genhtml_branch_coverage=1 00:05:25.575 --rc genhtml_function_coverage=1 00:05:25.575 --rc genhtml_legend=1 00:05:25.575 --rc geninfo_all_blocks=1 00:05:25.575 --rc geninfo_unexecuted_blocks=1 00:05:25.575 00:05:25.575 ' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.575 --rc genhtml_branch_coverage=1 00:05:25.575 --rc genhtml_function_coverage=1 00:05:25.575 --rc genhtml_legend=1 00:05:25.575 --rc geninfo_all_blocks=1 00:05:25.575 --rc geninfo_unexecuted_blocks=1 00:05:25.575 00:05:25.575 ' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.575 --rc genhtml_branch_coverage=1 00:05:25.575 --rc genhtml_function_coverage=1 00:05:25.575 --rc genhtml_legend=1 00:05:25.575 --rc geninfo_all_blocks=1 00:05:25.575 --rc geninfo_unexecuted_blocks=1 00:05:25.575 00:05:25.575 ' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.575 --rc genhtml_branch_coverage=1 00:05:25.575 --rc genhtml_function_coverage=1 00:05:25.575 --rc genhtml_legend=1 00:05:25.575 --rc geninfo_all_blocks=1 00:05:25.575 --rc geninfo_unexecuted_blocks=1 00:05:25.575 00:05:25.575 ' 00:05:25.575 14:38:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58402 00:05:25.575 14:38:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.575 14:38:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58402 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 58402 ']' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.575 14:38:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.575 14:38:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.576 14:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.576 [2024-12-09 14:38:03.581340] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:25.576 [2024-12-09 14:38:03.581466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58402 ] 00:05:25.834 [2024-12-09 14:38:03.743300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.834 [2024-12-09 14:38:03.853154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:05:25.834 [2024-12-09 14:38:03.853215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58402' to capture a snapshot of events at runtime. 00:05:25.834 [2024-12-09 14:38:03.853226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.834 [2024-12-09 14:38:03.853237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.834 [2024-12-09 14:38:03.853245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58402 for offline analysis/debug. 00:05:25.834 [2024-12-09 14:38:03.854142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.399 14:38:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.399 14:38:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.399 14:38:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.399 14:38:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.399 14:38:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.399 14:38:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.399 14:38:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.399 14:38:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.399 14:38:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.399 ************************************ 00:05:26.399 START TEST rpc_integrity 00:05:26.399 ************************************ 00:05:26.399 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.399 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.399 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.399 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.657 { 00:05:26.657 "name": "Malloc0", 00:05:26.657 "aliases": [ 00:05:26.657 "c5640a62-2183-4363-be4e-81869b730c38" 00:05:26.657 ], 
00:05:26.657 "product_name": "Malloc disk", 00:05:26.657 "block_size": 512, 00:05:26.657 "num_blocks": 16384, 00:05:26.657 "uuid": "c5640a62-2183-4363-be4e-81869b730c38", 00:05:26.657 "assigned_rate_limits": { 00:05:26.657 "rw_ios_per_sec": 0, 00:05:26.657 "rw_mbytes_per_sec": 0, 00:05:26.657 "r_mbytes_per_sec": 0, 00:05:26.657 "w_mbytes_per_sec": 0 00:05:26.657 }, 00:05:26.657 "claimed": false, 00:05:26.657 "zoned": false, 00:05:26.657 "supported_io_types": { 00:05:26.657 "read": true, 00:05:26.657 "write": true, 00:05:26.657 "unmap": true, 00:05:26.657 "flush": true, 00:05:26.657 "reset": true, 00:05:26.657 "nvme_admin": false, 00:05:26.657 "nvme_io": false, 00:05:26.657 "nvme_io_md": false, 00:05:26.657 "write_zeroes": true, 00:05:26.657 "zcopy": true, 00:05:26.657 "get_zone_info": false, 00:05:26.657 "zone_management": false, 00:05:26.657 "zone_append": false, 00:05:26.657 "compare": false, 00:05:26.657 "compare_and_write": false, 00:05:26.657 "abort": true, 00:05:26.657 "seek_hole": false, 00:05:26.657 "seek_data": false, 00:05:26.657 "copy": true, 00:05:26.657 "nvme_iov_md": false 00:05:26.657 }, 00:05:26.657 "memory_domains": [ 00:05:26.657 { 00:05:26.657 "dma_device_id": "system", 00:05:26.657 "dma_device_type": 1 00:05:26.658 }, 00:05:26.658 { 00:05:26.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.658 "dma_device_type": 2 00:05:26.658 } 00:05:26.658 ], 00:05:26.658 "driver_specific": {} 00:05:26.658 } 00:05:26.658 ]' 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.658 [2024-12-09 14:38:04.627973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.658 [2024-12-09 14:38:04.628035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.658 [2024-12-09 14:38:04.628063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:26.658 [2024-12-09 14:38:04.628075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.658 [2024-12-09 14:38:04.630672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.658 [2024-12-09 14:38:04.630721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.658 Passthru0 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.658 { 00:05:26.658 "name": "Malloc0", 00:05:26.658 "aliases": [ 00:05:26.658 "c5640a62-2183-4363-be4e-81869b730c38" 00:05:26.658 ], 00:05:26.658 "product_name": "Malloc disk", 00:05:26.658 "block_size": 512, 00:05:26.658 "num_blocks": 16384, 00:05:26.658 "uuid": "c5640a62-2183-4363-be4e-81869b730c38", 00:05:26.658 "assigned_rate_limits": { 00:05:26.658 "rw_ios_per_sec": 0, 
00:05:26.658 "rw_mbytes_per_sec": 0, 00:05:26.658 "r_mbytes_per_sec": 0, 00:05:26.658 "w_mbytes_per_sec": 0 00:05:26.658 }, 00:05:26.658 "claimed": true, 00:05:26.658 "claim_type": "exclusive_write", 00:05:26.658 "zoned": false, 00:05:26.658 "supported_io_types": { 00:05:26.658 "read": true, 00:05:26.658 "write": true, 00:05:26.658 "unmap": true, 00:05:26.658 "flush": true, 00:05:26.658 "reset": true, 00:05:26.658 "nvme_admin": false, 00:05:26.658 "nvme_io": false, 00:05:26.658 "nvme_io_md": false, 00:05:26.658 "write_zeroes": true, 00:05:26.658 "zcopy": true, 00:05:26.658 "get_zone_info": false, 00:05:26.658 "zone_management": false, 00:05:26.658 "zone_append": false, 00:05:26.658 "compare": false, 00:05:26.658 "compare_and_write": false, 00:05:26.658 "abort": true, 00:05:26.658 "seek_hole": false, 00:05:26.658 "seek_data": false, 00:05:26.658 "copy": true, 00:05:26.658 "nvme_iov_md": false 00:05:26.658 }, 00:05:26.658 "memory_domains": [ 00:05:26.658 { 00:05:26.658 "dma_device_id": "system", 00:05:26.658 "dma_device_type": 1 00:05:26.658 }, 00:05:26.658 { 00:05:26.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.658 "dma_device_type": 2 00:05:26.658 } 00:05:26.658 ], 00:05:26.658 "driver_specific": {} 00:05:26.658 }, 00:05:26.658 { 00:05:26.658 "name": "Passthru0", 00:05:26.658 "aliases": [ 00:05:26.658 "0b736d22-c3d2-500b-8e92-7d112e160737" 00:05:26.658 ], 00:05:26.658 "product_name": "passthru", 00:05:26.658 "block_size": 512, 00:05:26.658 "num_blocks": 16384, 00:05:26.658 "uuid": "0b736d22-c3d2-500b-8e92-7d112e160737", 00:05:26.658 "assigned_rate_limits": { 00:05:26.658 "rw_ios_per_sec": 0, 00:05:26.658 "rw_mbytes_per_sec": 0, 00:05:26.658 "r_mbytes_per_sec": 0, 00:05:26.658 "w_mbytes_per_sec": 0 00:05:26.658 }, 00:05:26.658 "claimed": false, 00:05:26.658 "zoned": false, 00:05:26.658 "supported_io_types": { 00:05:26.658 "read": true, 00:05:26.658 "write": true, 00:05:26.658 "unmap": true, 00:05:26.658 "flush": true, 00:05:26.658 "reset": true, 00:05:26.658 "nvme_admin": false, 00:05:26.658 "nvme_io": false, 00:05:26.658 "nvme_io_md": false, 00:05:26.658 "write_zeroes": true, 00:05:26.658 "zcopy": true, 00:05:26.658 "get_zone_info": false, 00:05:26.658 "zone_management": false, 00:05:26.658 "zone_append": false, 00:05:26.658 "compare": false, 00:05:26.658 "compare_and_write": false, 00:05:26.658 "abort": true, 00:05:26.658 "seek_hole": false, 00:05:26.658 "seek_data": false, 00:05:26.658 "copy": true, 00:05:26.658 "nvme_iov_md": false 00:05:26.658 }, 00:05:26.658 "memory_domains": [ 00:05:26.658 { 00:05:26.658 "dma_device_id": "system", 00:05:26.658 "dma_device_type": 1 00:05:26.658 }, 00:05:26.658 { 00:05:26.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.658 "dma_device_type": 2 00:05:26.658 } 00:05:26.658 ], 00:05:26.658 "driver_specific": { 00:05:26.658 "passthru": { 00:05:26.658 "name": "Passthru0", 00:05:26.658 "base_bdev_name": "Malloc0" 00:05:26.658 } 00:05:26.658 } 00:05:26.658 } 00:05:26.658 ]' 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.658 ************************************ 00:05:26.658 END TEST rpc_integrity 00:05:26.658 ************************************ 00:05:26.658 14:38:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.658 00:05:26.658 real 0m0.245s 00:05:26.658 user 0m0.129s 00:05:26.658 sys 0m0.034s 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.658 14:38:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 14:38:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.916 14:38:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.916 14:38:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.916 14:38:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 ************************************ 00:05:26.916 START TEST rpc_plugins 00:05:26.916 ************************************ 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.916 { 00:05:26.916 "name": "Malloc1", 00:05:26.916 "aliases": [ 00:05:26.916 "05b4836d-7bda-4325-b0f3-e5b36e2897ed" 00:05:26.916 ], 00:05:26.916 "product_name": "Malloc disk", 00:05:26.916 "block_size": 4096, 00:05:26.916 "num_blocks": 256, 00:05:26.916 "uuid": "05b4836d-7bda-4325-b0f3-e5b36e2897ed", 00:05:26.916 "assigned_rate_limits": { 00:05:26.916 "rw_ios_per_sec": 0, 00:05:26.916 "rw_mbytes_per_sec": 0, 00:05:26.916 "r_mbytes_per_sec": 0, 00:05:26.916 "w_mbytes_per_sec": 0 00:05:26.916 }, 00:05:26.916 "claimed": false, 00:05:26.916 "zoned": false, 00:05:26.916 "supported_io_types": { 00:05:26.916 "read": true, 00:05:26.916 "write": true, 00:05:26.916 "unmap": true, 00:05:26.916 "flush": true, 00:05:26.916 "reset": true, 00:05:26.916 "nvme_admin": false, 00:05:26.916 "nvme_io": false, 00:05:26.916 "nvme_io_md": false, 00:05:26.916 "write_zeroes": true, 
00:05:26.916 "zcopy": true, 00:05:26.916 "get_zone_info": false, 00:05:26.916 "zone_management": false, 00:05:26.916 "zone_append": false, 00:05:26.916 "compare": false, 00:05:26.916 "compare_and_write": false, 00:05:26.916 "abort": true, 00:05:26.916 "seek_hole": false, 00:05:26.916 "seek_data": false, 00:05:26.916 "copy": true, 00:05:26.916 "nvme_iov_md": false 00:05:26.916 }, 00:05:26.916 "memory_domains": [ 00:05:26.916 { 00:05:26.916 "dma_device_id": "system", 00:05:26.916 "dma_device_type": 1 00:05:26.916 }, 00:05:26.916 { 00:05:26.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.916 "dma_device_type": 2 00:05:26.916 } 00:05:26.916 ], 00:05:26.916 "driver_specific": {} 00:05:26.916 } 00:05:26.916 ]' 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.916 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.916 ************************************ 00:05:26.916 END TEST rpc_plugins 00:05:26.916 ************************************ 00:05:26.916 14:38:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.916 00:05:26.916 real 0m0.116s 00:05:26.917 user 0m0.061s 00:05:26.917 sys 0m0.014s 00:05:26.917 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.917 14:38:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.917 14:38:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.917 14:38:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.917 14:38:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.917 14:38:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.917 ************************************ 00:05:26.917 START TEST rpc_trace_cmd_test 00:05:26.917 ************************************ 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.917 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58402", 00:05:26.917 "tpoint_group_mask": "0x8", 00:05:26.917 "iscsi_conn": { 00:05:26.917 "mask": "0x2", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "scsi": { 00:05:26.917 
"mask": "0x4", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "bdev": { 00:05:26.917 "mask": "0x8", 00:05:26.917 "tpoint_mask": "0xffffffffffffffff" 00:05:26.917 }, 00:05:26.917 "nvmf_rdma": { 00:05:26.917 "mask": "0x10", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "nvmf_tcp": { 00:05:26.917 "mask": "0x20", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "ftl": { 00:05:26.917 "mask": "0x40", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "blobfs": { 00:05:26.917 "mask": "0x80", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "dsa": { 00:05:26.917 "mask": "0x200", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "thread": { 00:05:26.917 "mask": "0x400", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "nvme_pcie": { 00:05:26.917 "mask": "0x800", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "iaa": { 00:05:26.917 "mask": "0x1000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "nvme_tcp": { 00:05:26.917 "mask": "0x2000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "bdev_nvme": { 00:05:26.917 "mask": "0x4000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "sock": { 00:05:26.917 "mask": "0x8000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "blob": { 00:05:26.917 "mask": "0x10000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "bdev_raid": { 00:05:26.917 "mask": "0x20000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 }, 00:05:26.917 "scheduler": { 00:05:26.917 "mask": "0x40000", 00:05:26.917 "tpoint_mask": "0x0" 00:05:26.917 } 00:05:26.917 }' 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:26.917 14:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.917 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.917 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:27.175 00:05:27.175 real 0m0.171s 00:05:27.175 user 0m0.141s 00:05:27.175 sys 0m0.020s 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 ************************************ 00:05:27.175 END TEST rpc_trace_cmd_test 00:05:27.175 ************************************ 00:05:27.175 14:38:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.175 14:38:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.175 14:38:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.175 14:38:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.175 14:38:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.175 14:38:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 ************************************ 00:05:27.175 START TEST rpc_daemon_integrity 00:05:27.175 
************************************ 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.175 { 00:05:27.175 "name": "Malloc2", 00:05:27.175 "aliases": [ 00:05:27.175 "e145b5c7-f393-4fd8-ad44-3bf359c636bc" 00:05:27.175 ], 00:05:27.175 "product_name": "Malloc disk", 00:05:27.175 "block_size": 512, 00:05:27.175 "num_blocks": 16384, 00:05:27.175 "uuid": "e145b5c7-f393-4fd8-ad44-3bf359c636bc", 00:05:27.175 "assigned_rate_limits": { 00:05:27.175 "rw_ios_per_sec": 0, 00:05:27.175 "rw_mbytes_per_sec": 0, 00:05:27.175 "r_mbytes_per_sec": 0, 00:05:27.175 "w_mbytes_per_sec": 0 00:05:27.175 }, 00:05:27.175 "claimed": false, 00:05:27.175 "zoned": false, 00:05:27.175 "supported_io_types": { 00:05:27.175 "read": true, 00:05:27.175 "write": true, 00:05:27.175 "unmap": true, 00:05:27.175 "flush": true, 00:05:27.175 "reset": true, 00:05:27.175 "nvme_admin": false, 00:05:27.175 "nvme_io": false, 00:05:27.175 "nvme_io_md": false, 00:05:27.175 "write_zeroes": true, 00:05:27.175 "zcopy": true, 00:05:27.175 "get_zone_info": false, 00:05:27.175 "zone_management": false, 00:05:27.175 "zone_append": false, 00:05:27.175 "compare": false, 00:05:27.175 "compare_and_write": false, 00:05:27.175 "abort": true, 00:05:27.175 "seek_hole": false, 00:05:27.175 "seek_data": false, 00:05:27.175 "copy": true, 00:05:27.175 "nvme_iov_md": false 00:05:27.175 }, 00:05:27.175 "memory_domains": [ 00:05:27.175 { 00:05:27.175 "dma_device_id": "system", 00:05:27.175 "dma_device_type": 1 00:05:27.175 }, 00:05:27.175 { 00:05:27.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.175 "dma_device_type": 2 00:05:27.175 } 00:05:27.175 ], 00:05:27.175 "driver_specific": {} 00:05:27.175 } 00:05:27.175 ]' 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.175 [2024-12-09 14:38:05.277360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.175 [2024-12-09 14:38:05.277444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.175 [2024-12-09 14:38:05.277470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:27.175 [2024-12-09 14:38:05.277487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.175 [2024-12-09 14:38:05.279927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.175 [2024-12-09 14:38:05.279967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.175 Passthru0 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.175 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.432 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.432 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.432 { 00:05:27.432 "name": "Malloc2", 00:05:27.432 "aliases": [ 00:05:27.432 "e145b5c7-f393-4fd8-ad44-3bf359c636bc" 00:05:27.432 ], 00:05:27.432 "product_name": "Malloc disk", 00:05:27.432 "block_size": 512, 00:05:27.432 "num_blocks": 16384, 00:05:27.432 "uuid": "e145b5c7-f393-4fd8-ad44-3bf359c636bc", 00:05:27.432 "assigned_rate_limits": { 00:05:27.432 "rw_ios_per_sec": 0, 00:05:27.432 "rw_mbytes_per_sec": 0, 00:05:27.432 "r_mbytes_per_sec": 0, 00:05:27.432 "w_mbytes_per_sec": 0 00:05:27.432 }, 00:05:27.432 "claimed": true, 00:05:27.432 "claim_type": "exclusive_write", 00:05:27.432 "zoned": false, 00:05:27.432 "supported_io_types": { 00:05:27.432 "read": true, 00:05:27.432 "write": true, 00:05:27.432 "unmap": true, 00:05:27.432 "flush": true, 00:05:27.432 "reset": true, 00:05:27.432 "nvme_admin": false, 00:05:27.432 "nvme_io": false, 00:05:27.432 "nvme_io_md": false, 00:05:27.432 "write_zeroes": true, 00:05:27.432 "zcopy": true, 00:05:27.432 "get_zone_info": false, 00:05:27.432 "zone_management": false, 00:05:27.432 "zone_append": false, 00:05:27.432 "compare": false, 00:05:27.432 "compare_and_write": false, 00:05:27.432 "abort": true, 00:05:27.432 "seek_hole": false, 00:05:27.432 "seek_data": false, 00:05:27.432 "copy": true, 00:05:27.432 "nvme_iov_md": false 00:05:27.432 }, 00:05:27.432 "memory_domains": [ 00:05:27.432 { 00:05:27.432 "dma_device_id": "system", 00:05:27.432 "dma_device_type": 1 00:05:27.432 }, 00:05:27.432 { 00:05:27.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.432 "dma_device_type": 2 00:05:27.432 } 00:05:27.432 ], 00:05:27.432 "driver_specific": {} 00:05:27.432 }, 00:05:27.432 { 00:05:27.432 "name": "Passthru0", 00:05:27.432 "aliases": [ 00:05:27.432 "07752153-20e8-58fe-80ec-2dca8388b7d7" 00:05:27.432 ], 00:05:27.432 "product_name": "passthru", 00:05:27.432 "block_size": 512, 00:05:27.432 "num_blocks": 16384, 00:05:27.432 "uuid": "07752153-20e8-58fe-80ec-2dca8388b7d7", 00:05:27.432 "assigned_rate_limits": { 00:05:27.432 
"rw_ios_per_sec": 0, 00:05:27.432 "rw_mbytes_per_sec": 0, 00:05:27.432 "r_mbytes_per_sec": 0, 00:05:27.432 "w_mbytes_per_sec": 0 00:05:27.432 }, 00:05:27.432 "claimed": false, 00:05:27.432 "zoned": false, 00:05:27.432 "supported_io_types": { 00:05:27.432 "read": true, 00:05:27.432 "write": true, 00:05:27.432 "unmap": true, 00:05:27.432 "flush": true, 00:05:27.432 "reset": true, 00:05:27.432 "nvme_admin": false, 00:05:27.432 "nvme_io": false, 00:05:27.432 "nvme_io_md": false, 00:05:27.432 "write_zeroes": true, 00:05:27.432 "zcopy": true, 00:05:27.432 "get_zone_info": false, 00:05:27.432 "zone_management": false, 00:05:27.432 "zone_append": false, 00:05:27.432 "compare": false, 00:05:27.432 "compare_and_write": false, 00:05:27.432 "abort": true, 00:05:27.432 "seek_hole": false, 00:05:27.433 "seek_data": false, 00:05:27.433 "copy": true, 00:05:27.433 "nvme_iov_md": false 00:05:27.433 }, 00:05:27.433 "memory_domains": [ 00:05:27.433 { 00:05:27.433 "dma_device_id": "system", 00:05:27.433 "dma_device_type": 1 00:05:27.433 }, 00:05:27.433 { 00:05:27.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.433 "dma_device_type": 2 00:05:27.433 } 00:05:27.433 ], 00:05:27.433 "driver_specific": { 00:05:27.433 "passthru": { 00:05:27.433 "name": "Passthru0", 00:05:27.433 "base_bdev_name": "Malloc2" 00:05:27.433 } 00:05:27.433 } 00:05:27.433 } 00:05:27.433 ]' 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.433 ************************************ 00:05:27.433 END TEST rpc_daemon_integrity 00:05:27.433 ************************************ 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.433 00:05:27.433 real 0m0.255s 00:05:27.433 user 0m0.133s 00:05:27.433 sys 0m0.032s 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.433 14:38:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.433 14:38:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.433 14:38:05 rpc -- rpc/rpc.sh@84 -- # killprocess 58402 00:05:27.433 14:38:05 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 58402 ']' 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@958 -- # kill -0 58402 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58402 00:05:27.433 killing process with pid 58402 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58402' 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@973 -- # kill 58402 00:05:27.433 14:38:05 rpc -- common/autotest_common.sh@978 -- # wait 58402 00:05:29.335 00:05:29.335 real 0m3.698s 00:05:29.335 user 0m4.102s 00:05:29.335 sys 0m0.619s 00:05:29.335 ************************************ 00:05:29.335 END TEST rpc 00:05:29.335 ************************************ 00:05:29.335 14:38:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.335 14:38:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.335 14:38:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.335 14:38:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.335 14:38:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.335 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:05:29.335 ************************************ 00:05:29.335 START TEST skip_rpc 00:05:29.335 ************************************ 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:29.335 * Looking for test storage... 00:05:29.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.335 14:38:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.335 --rc genhtml_branch_coverage=1 00:05:29.335 --rc genhtml_function_coverage=1 00:05:29.335 --rc genhtml_legend=1 00:05:29.335 --rc geninfo_all_blocks=1 00:05:29.335 --rc geninfo_unexecuted_blocks=1 00:05:29.335 00:05:29.335 ' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.335 --rc genhtml_branch_coverage=1 00:05:29.335 --rc genhtml_function_coverage=1 00:05:29.335 --rc genhtml_legend=1 00:05:29.335 --rc geninfo_all_blocks=1 00:05:29.335 --rc geninfo_unexecuted_blocks=1 00:05:29.335 00:05:29.335 ' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.335 --rc genhtml_branch_coverage=1 00:05:29.335 --rc genhtml_function_coverage=1 00:05:29.335 --rc genhtml_legend=1 00:05:29.335 --rc geninfo_all_blocks=1 00:05:29.335 --rc geninfo_unexecuted_blocks=1 00:05:29.335 00:05:29.335 ' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.335 --rc genhtml_branch_coverage=1 00:05:29.335 --rc genhtml_function_coverage=1 00:05:29.335 --rc genhtml_legend=1 00:05:29.335 --rc geninfo_all_blocks=1 00:05:29.335 --rc geninfo_unexecuted_blocks=1 00:05:29.335 00:05:29.335 ' 00:05:29.335 14:38:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.335 14:38:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.335 14:38:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.335 14:38:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.335 ************************************ 00:05:29.335 START TEST skip_rpc 00:05:29.335 ************************************ 00:05:29.335 14:38:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:29.335 14:38:07 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58615 00:05:29.335 14:38:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.335 14:38:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:29.335 14:38:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:29.335 [2024-12-09 14:38:07.335941] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:29.335 [2024-12-09 14:38:07.336076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58615 ] 00:05:29.593 [2024-12-09 14:38:07.498834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.593 [2024-12-09 14:38:07.618306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58615 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58615 ']' 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58615 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58615 00:05:34.923 killing process with pid 58615 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58615' 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 58615 00:05:34.923 14:38:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58615 00:05:35.863 00:05:35.863 real 0m6.548s 00:05:35.863 user 0m6.114s 00:05:35.863 sys 0m0.327s 00:05:35.863 14:38:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.863 14:38:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.863 ************************************ 00:05:35.863 END TEST skip_rpc 00:05:35.863 ************************************ 00:05:35.863 14:38:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:35.863 14:38:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.863 14:38:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.863 14:38:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.863 ************************************ 00:05:35.863 START TEST skip_rpc_with_json 00:05:35.863 ************************************ 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58713 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58713 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58713 ']' 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.863 14:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.863 [2024-12-09 14:38:13.926813] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
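(For orientation before the long config dump that follows: the skip_rpc_with_json test drives a small JSON-RPC round-trip against the freshly started spdk_tgt. A minimal sketch of that flow, assuming the rpc.py client shipped in scripts/ is run from the repo root — the trace below performs the same calls through its rpc_cmd wrapper:

    # start the target, then exercise the RPC surface the test checks
    ./build/bin/spdk_tgt -m 0x1 &
    ./scripts/rpc.py nvmf_get_transports --trtype tcp    # expected to fail: no TCP transport exists yet
    ./scripts/rpc.py nvmf_create_transport -t tcp        # now create it
    ./scripts/rpc.py save_config > test/rpc/config.json  # dump live config; the test later reboots from this JSON

All three method names, the -t/--trtype flags, and the config.json path are taken verbatim from the trace; only the direct rpc.py invocation and the backgrounding are illustrative.)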
00:05:35.863 [2024-12-09 14:38:13.927296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58713 ] 00:05:36.124 [2024-12-09 14:38:14.089892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.124 [2024-12-09 14:38:14.193915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.068 [2024-12-09 14:38:14.837781] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.068 request: 00:05:37.068 { 00:05:37.068 "trtype": "tcp", 00:05:37.068 "method": "nvmf_get_transports", 00:05:37.068 "req_id": 1 00:05:37.068 } 00:05:37.068 Got JSON-RPC error response 00:05:37.068 response: 00:05:37.068 { 00:05:37.068 "code": -19, 00:05:37.068 "message": "No such device" 00:05:37.068 } 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.068 [2024-12-09 14:38:14.849900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.068 14:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.069 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.069 14:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.069 { 00:05:37.069 "subsystems": [ 00:05:37.069 { 00:05:37.069 "subsystem": "fsdev", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "fsdev_set_opts", 00:05:37.069 "params": { 00:05:37.069 "fsdev_io_pool_size": 65535, 00:05:37.069 "fsdev_io_cache_size": 256 00:05:37.069 } 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "keyring", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "iobuf", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "iobuf_set_options", 00:05:37.069 "params": { 00:05:37.069 "small_pool_count": 8192, 00:05:37.069 "large_pool_count": 1024, 00:05:37.069 "small_bufsize": 8192, 00:05:37.069 "large_bufsize": 135168, 00:05:37.069 "enable_numa": false 00:05:37.069 } 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "sock", 00:05:37.069 "config": [ 00:05:37.069 { 
00:05:37.069 "method": "sock_set_default_impl", 00:05:37.069 "params": { 00:05:37.069 "impl_name": "posix" 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "sock_impl_set_options", 00:05:37.069 "params": { 00:05:37.069 "impl_name": "ssl", 00:05:37.069 "recv_buf_size": 4096, 00:05:37.069 "send_buf_size": 4096, 00:05:37.069 "enable_recv_pipe": true, 00:05:37.069 "enable_quickack": false, 00:05:37.069 "enable_placement_id": 0, 00:05:37.069 "enable_zerocopy_send_server": true, 00:05:37.069 "enable_zerocopy_send_client": false, 00:05:37.069 "zerocopy_threshold": 0, 00:05:37.069 "tls_version": 0, 00:05:37.069 "enable_ktls": false 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "sock_impl_set_options", 00:05:37.069 "params": { 00:05:37.069 "impl_name": "posix", 00:05:37.069 "recv_buf_size": 2097152, 00:05:37.069 "send_buf_size": 2097152, 00:05:37.069 "enable_recv_pipe": true, 00:05:37.069 "enable_quickack": false, 00:05:37.069 "enable_placement_id": 0, 00:05:37.069 "enable_zerocopy_send_server": true, 00:05:37.069 "enable_zerocopy_send_client": false, 00:05:37.069 "zerocopy_threshold": 0, 00:05:37.069 "tls_version": 0, 00:05:37.069 "enable_ktls": false 00:05:37.069 } 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "vmd", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "accel", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "accel_set_options", 00:05:37.069 "params": { 00:05:37.069 "small_cache_size": 128, 00:05:37.069 "large_cache_size": 16, 00:05:37.069 "task_count": 2048, 00:05:37.069 "sequence_count": 2048, 00:05:37.069 "buf_count": 2048 00:05:37.069 } 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "bdev", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "bdev_set_options", 00:05:37.069 "params": { 00:05:37.069 "bdev_io_pool_size": 65535, 00:05:37.069 "bdev_io_cache_size": 256, 00:05:37.069 "bdev_auto_examine": true, 00:05:37.069 "iobuf_small_cache_size": 128, 00:05:37.069 "iobuf_large_cache_size": 16 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "bdev_raid_set_options", 00:05:37.069 "params": { 00:05:37.069 "process_window_size_kb": 1024, 00:05:37.069 "process_max_bandwidth_mb_sec": 0 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "bdev_iscsi_set_options", 00:05:37.069 "params": { 00:05:37.069 "timeout_sec": 30 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "bdev_nvme_set_options", 00:05:37.069 "params": { 00:05:37.069 "action_on_timeout": "none", 00:05:37.069 "timeout_us": 0, 00:05:37.069 "timeout_admin_us": 0, 00:05:37.069 "keep_alive_timeout_ms": 10000, 00:05:37.069 "arbitration_burst": 0, 00:05:37.069 "low_priority_weight": 0, 00:05:37.069 "medium_priority_weight": 0, 00:05:37.069 "high_priority_weight": 0, 00:05:37.069 "nvme_adminq_poll_period_us": 10000, 00:05:37.069 "nvme_ioq_poll_period_us": 0, 00:05:37.069 "io_queue_requests": 0, 00:05:37.069 "delay_cmd_submit": true, 00:05:37.069 "transport_retry_count": 4, 00:05:37.069 "bdev_retry_count": 3, 00:05:37.069 "transport_ack_timeout": 0, 00:05:37.069 "ctrlr_loss_timeout_sec": 0, 00:05:37.069 "reconnect_delay_sec": 0, 00:05:37.069 "fast_io_fail_timeout_sec": 0, 00:05:37.069 "disable_auto_failback": false, 00:05:37.069 "generate_uuids": false, 00:05:37.069 "transport_tos": 0, 00:05:37.069 "nvme_error_stat": false, 00:05:37.069 "rdma_srq_size": 0, 00:05:37.069 "io_path_stat": false, 
00:05:37.069 "allow_accel_sequence": false, 00:05:37.069 "rdma_max_cq_size": 0, 00:05:37.069 "rdma_cm_event_timeout_ms": 0, 00:05:37.069 "dhchap_digests": [ 00:05:37.069 "sha256", 00:05:37.069 "sha384", 00:05:37.069 "sha512" 00:05:37.069 ], 00:05:37.069 "dhchap_dhgroups": [ 00:05:37.069 "null", 00:05:37.069 "ffdhe2048", 00:05:37.069 "ffdhe3072", 00:05:37.069 "ffdhe4096", 00:05:37.069 "ffdhe6144", 00:05:37.069 "ffdhe8192" 00:05:37.069 ] 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "bdev_nvme_set_hotplug", 00:05:37.069 "params": { 00:05:37.069 "period_us": 100000, 00:05:37.069 "enable": false 00:05:37.069 } 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "method": "bdev_wait_for_examine" 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "scsi", 00:05:37.069 "config": null 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "scheduler", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "framework_set_scheduler", 00:05:37.069 "params": { 00:05:37.069 "name": "static" 00:05:37.069 } 00:05:37.069 } 00:05:37.069 ] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "vhost_scsi", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "vhost_blk", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "ublk", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "nbd", 00:05:37.069 "config": [] 00:05:37.069 }, 00:05:37.069 { 00:05:37.069 "subsystem": "nvmf", 00:05:37.069 "config": [ 00:05:37.069 { 00:05:37.069 "method": "nvmf_set_config", 00:05:37.069 "params": { 00:05:37.069 "discovery_filter": "match_any", 00:05:37.069 "admin_cmd_passthru": { 00:05:37.069 "identify_ctrlr": false 00:05:37.069 }, 00:05:37.069 "dhchap_digests": [ 00:05:37.069 "sha256", 00:05:37.069 "sha384", 00:05:37.069 "sha512" 00:05:37.069 ], 00:05:37.069 "dhchap_dhgroups": [ 00:05:37.070 "null", 00:05:37.070 "ffdhe2048", 00:05:37.070 "ffdhe3072", 00:05:37.070 "ffdhe4096", 00:05:37.070 "ffdhe6144", 00:05:37.070 "ffdhe8192" 00:05:37.070 ] 00:05:37.070 } 00:05:37.070 }, 00:05:37.070 { 00:05:37.070 "method": "nvmf_set_max_subsystems", 00:05:37.070 "params": { 00:05:37.070 "max_subsystems": 1024 00:05:37.070 } 00:05:37.070 }, 00:05:37.070 { 00:05:37.070 "method": "nvmf_set_crdt", 00:05:37.070 "params": { 00:05:37.070 "crdt1": 0, 00:05:37.070 "crdt2": 0, 00:05:37.070 "crdt3": 0 00:05:37.070 } 00:05:37.070 }, 00:05:37.070 { 00:05:37.070 "method": "nvmf_create_transport", 00:05:37.070 "params": { 00:05:37.070 "trtype": "TCP", 00:05:37.070 "max_queue_depth": 128, 00:05:37.070 "max_io_qpairs_per_ctrlr": 127, 00:05:37.070 "in_capsule_data_size": 4096, 00:05:37.070 "max_io_size": 131072, 00:05:37.070 "io_unit_size": 131072, 00:05:37.070 "max_aq_depth": 128, 00:05:37.070 "num_shared_buffers": 511, 00:05:37.070 "buf_cache_size": 4294967295, 00:05:37.070 "dif_insert_or_strip": false, 00:05:37.070 "zcopy": false, 00:05:37.070 "c2h_success": true, 00:05:37.070 "sock_priority": 0, 00:05:37.070 "abort_timeout_sec": 1, 00:05:37.070 "ack_timeout": 0, 00:05:37.070 "data_wr_pool_size": 0 00:05:37.070 } 00:05:37.070 } 00:05:37.070 ] 00:05:37.070 }, 00:05:37.070 { 00:05:37.070 "subsystem": "iscsi", 00:05:37.070 "config": [ 00:05:37.070 { 00:05:37.070 "method": "iscsi_set_options", 00:05:37.070 "params": { 00:05:37.070 "node_base": "iqn.2016-06.io.spdk", 00:05:37.070 "max_sessions": 128, 00:05:37.070 "max_connections_per_session": 2, 00:05:37.070 "max_queue_depth": 64, 00:05:37.070 
"default_time2wait": 2, 00:05:37.070 "default_time2retain": 20, 00:05:37.070 "first_burst_length": 8192, 00:05:37.070 "immediate_data": true, 00:05:37.070 "allow_duplicated_isid": false, 00:05:37.070 "error_recovery_level": 0, 00:05:37.070 "nop_timeout": 60, 00:05:37.070 "nop_in_interval": 30, 00:05:37.070 "disable_chap": false, 00:05:37.070 "require_chap": false, 00:05:37.070 "mutual_chap": false, 00:05:37.070 "chap_group": 0, 00:05:37.070 "max_large_datain_per_connection": 64, 00:05:37.070 "max_r2t_per_connection": 4, 00:05:37.070 "pdu_pool_size": 36864, 00:05:37.070 "immediate_data_pool_size": 16384, 00:05:37.070 "data_out_pool_size": 2048 00:05:37.070 } 00:05:37.070 } 00:05:37.070 ] 00:05:37.070 } 00:05:37.070 ] 00:05:37.070 } 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58713 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58713 ']' 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58713 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58713 00:05:37.070 killing process with pid 58713 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58713' 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58713 00:05:37.070 14:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58713 00:05:38.519 14:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58753 00:05:38.519 14:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.519 14:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58753 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58753 ']' 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58753 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58753 00:05:43.807 killing process with pid 58753 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58753' 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58753 00:05:43.807 14:38:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58753 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:45.195 00:05:45.195 real 0m9.094s 00:05:45.195 user 0m8.622s 00:05:45.195 sys 0m0.706s 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.195 ************************************ 00:05:45.195 END TEST skip_rpc_with_json 00:05:45.195 ************************************ 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 14:38:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:45.195 14:38:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.195 14:38:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.195 14:38:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 ************************************ 00:05:45.195 START TEST skip_rpc_with_delay 00:05:45.195 ************************************ 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:45.195 14:38:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.195 [2024-12-09 14:38:23.081908] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
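(The *ERROR* line above is the expected outcome, not a failure of the run: skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc when the RPC server is disabled. The invocation under test, reproduced from the trace — the NOT/es=1 bookkeeping that follows merely asserts a non-zero exit:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    echo $?    # non-zero; the test's NOT wrapper turns this into a pass
)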
00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.195 00:05:45.195 real 0m0.144s 00:05:45.195 user 0m0.066s 00:05:45.195 sys 0m0.076s 00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.195 14:38:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 ************************************ 00:05:45.195 END TEST skip_rpc_with_delay 00:05:45.195 ************************************ 00:05:45.195 14:38:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:45.195 14:38:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:45.195 14:38:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:45.195 14:38:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.195 14:38:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.195 14:38:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 ************************************ 00:05:45.195 START TEST exit_on_failed_rpc_init 00:05:45.195 ************************************ 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58875 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58875 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58875 ']' 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.195 14:38:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 [2024-12-09 14:38:23.294719] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
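(exit_on_failed_rpc_init begins here: one spdk_tgt (-m 0x1, pid 58875) claims the default RPC socket, then the test launches a second instance (-m 0x2) that must fail initialization because /var/tmp/spdk.sock is already in use — the rpc.c errors further below confirm this. The shape of the scenario, as a sketch with flags taken from the trace:

    ./build/bin/spdk_tgt -m 0x1 &    # first instance listens on /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2      # second instance: RPC listen fails, spdk_app_stop exits non-zero

The test then verifies the non-zero exit code and kills the surviving first instance.)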
00:05:45.195 [2024-12-09 14:38:23.294900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:05:45.456 [2024-12-09 14:38:23.457246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.456 [2024-12-09 14:38:23.557307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.400 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.400 [2024-12-09 14:38:24.249216] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:46.400 [2024-12-09 14:38:24.249342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58893 ] 00:05:46.400 [2024-12-09 14:38:24.407990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.661 [2024-12-09 14:38:24.524449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.661 [2024-12-09 14:38:24.524682] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:46.661 [2024-12-09 14:38:24.524703] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:46.661 [2024-12-09 14:38:24.524717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58875 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58875 ']' 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58875 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58875 00:05:46.661 killing process with pid 58875 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58875' 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58875 00:05:46.661 14:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58875 00:05:48.570 ************************************ 00:05:48.570 END TEST exit_on_failed_rpc_init 00:05:48.570 ************************************ 00:05:48.570 00:05:48.570 real 0m3.132s 00:05:48.570 user 0m3.441s 00:05:48.570 sys 0m0.483s 00:05:48.570 14:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.570 14:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 14:38:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.570 ************************************ 00:05:48.570 END TEST skip_rpc 00:05:48.570 ************************************ 00:05:48.570 00:05:48.570 real 0m19.273s 00:05:48.570 user 0m18.380s 00:05:48.570 sys 0m1.781s 00:05:48.570 14:38:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.570 14:38:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 14:38:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:48.570 14:38:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.570 14:38:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.570 14:38:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 
************************************ 00:05:48.570 START TEST rpc_client 00:05:48.570 ************************************ 00:05:48.570 14:38:26 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:48.570 * Looking for test storage... 00:05:48.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.571 14:38:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.571 --rc genhtml_branch_coverage=1 00:05:48.571 --rc genhtml_function_coverage=1 00:05:48.571 --rc genhtml_legend=1 00:05:48.571 --rc geninfo_all_blocks=1 00:05:48.571 --rc geninfo_unexecuted_blocks=1 00:05:48.571 00:05:48.571 ' 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.571 --rc genhtml_branch_coverage=1 00:05:48.571 --rc genhtml_function_coverage=1 00:05:48.571 --rc genhtml_legend=1 00:05:48.571 --rc geninfo_all_blocks=1 00:05:48.571 --rc geninfo_unexecuted_blocks=1 00:05:48.571 00:05:48.571 ' 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.571 --rc genhtml_branch_coverage=1 00:05:48.571 --rc genhtml_function_coverage=1 00:05:48.571 --rc genhtml_legend=1 00:05:48.571 --rc geninfo_all_blocks=1 00:05:48.571 --rc geninfo_unexecuted_blocks=1 00:05:48.571 00:05:48.571 ' 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.571 --rc genhtml_branch_coverage=1 00:05:48.571 --rc genhtml_function_coverage=1 00:05:48.571 --rc genhtml_legend=1 00:05:48.571 --rc geninfo_all_blocks=1 00:05:48.571 --rc geninfo_unexecuted_blocks=1 00:05:48.571 00:05:48.571 ' 00:05:48.571 14:38:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:48.571 OK 00:05:48.571 14:38:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:48.571 00:05:48.571 real 0m0.200s 00:05:48.571 user 0m0.115s 00:05:48.571 sys 0m0.089s 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.571 14:38:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:48.571 ************************************ 00:05:48.571 END TEST rpc_client 00:05:48.571 ************************************ 00:05:48.571 14:38:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:48.571 14:38:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.571 14:38:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.571 14:38:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.571 ************************************ 00:05:48.571 START TEST json_config 00:05:48.571 ************************************ 00:05:48.571 14:38:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.832 14:38:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.832 14:38:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.832 14:38:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.832 14:38:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.832 14:38:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.832 14:38:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:48.832 14:38:26 json_config -- scripts/common.sh@345 -- # : 1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.832 14:38:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.832 14:38:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@353 -- # local d=1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.832 14:38:26 json_config -- scripts/common.sh@355 -- # echo 1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.832 14:38:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@353 -- # local d=2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.832 14:38:26 json_config -- scripts/common.sh@355 -- # echo 2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.832 14:38:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.832 14:38:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.832 14:38:26 json_config -- scripts/common.sh@368 -- # return 0 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.832 --rc genhtml_branch_coverage=1 00:05:48.832 --rc genhtml_function_coverage=1 00:05:48.832 --rc genhtml_legend=1 00:05:48.832 --rc geninfo_all_blocks=1 00:05:48.832 --rc geninfo_unexecuted_blocks=1 00:05:48.832 00:05:48.832 ' 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.832 --rc genhtml_branch_coverage=1 00:05:48.832 --rc genhtml_function_coverage=1 00:05:48.832 --rc genhtml_legend=1 00:05:48.832 --rc geninfo_all_blocks=1 00:05:48.832 --rc geninfo_unexecuted_blocks=1 00:05:48.832 00:05:48.832 ' 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.832 --rc genhtml_branch_coverage=1 00:05:48.832 --rc genhtml_function_coverage=1 00:05:48.832 --rc genhtml_legend=1 00:05:48.832 --rc geninfo_all_blocks=1 00:05:48.832 --rc geninfo_unexecuted_blocks=1 00:05:48.832 00:05:48.832 ' 00:05:48.832 14:38:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.832 --rc genhtml_branch_coverage=1 00:05:48.832 --rc genhtml_function_coverage=1 00:05:48.832 --rc genhtml_legend=1 00:05:48.832 --rc geninfo_all_blocks=1 00:05:48.832 --rc geninfo_unexecuted_blocks=1 00:05:48.832 00:05:48.832 ' 00:05:48.832 14:38:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.832 14:38:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:889ec288-9bdb-4025-81ca-1ba3f773afd1 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=889ec288-9bdb-4025-81ca-1ba3f773afd1 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.832 14:38:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:48.832 14:38:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.832 14:38:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.832 14:38:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.832 14:38:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.832 14:38:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.832 14:38:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.832 14:38:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.832 14:38:26 json_config -- paths/export.sh@5 -- # export PATH 00:05:48.833 14:38:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@51 -- # : 0 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.833 14:38:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.833 14:38:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:48.833 WARNING: No tests are enabled so not running JSON configuration tests 00:05:48.833 14:38:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:48.833 00:05:48.833 real 0m0.143s 00:05:48.833 user 0m0.100s 00:05:48.833 sys 0m0.044s 00:05:48.833 14:38:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.833 14:38:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.833 ************************************ 00:05:48.833 END TEST json_config 00:05:48.833 ************************************ 00:05:48.833 14:38:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:48.833 14:38:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.833 14:38:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.833 14:38:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.833 ************************************ 00:05:48.833 START TEST json_config_extra_key 00:05:48.833 ************************************ 00:05:48.833 14:38:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:48.833 14:38:26 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.833 14:38:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.833 14:38:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.167 14:38:26 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.167 --rc genhtml_branch_coverage=1 00:05:49.167 --rc genhtml_function_coverage=1 00:05:49.167 --rc genhtml_legend=1 00:05:49.167 --rc geninfo_all_blocks=1 00:05:49.167 --rc geninfo_unexecuted_blocks=1 00:05:49.167 00:05:49.167 ' 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.167 --rc genhtml_branch_coverage=1 00:05:49.167 --rc genhtml_function_coverage=1 00:05:49.167 --rc genhtml_legend=1 00:05:49.167 --rc geninfo_all_blocks=1 00:05:49.167 --rc geninfo_unexecuted_blocks=1 00:05:49.167 00:05:49.167 ' 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.167 --rc genhtml_branch_coverage=1 00:05:49.167 --rc genhtml_function_coverage=1 00:05:49.167 --rc genhtml_legend=1 00:05:49.167 --rc geninfo_all_blocks=1 00:05:49.167 --rc geninfo_unexecuted_blocks=1 00:05:49.167 00:05:49.167 ' 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.167 --rc genhtml_branch_coverage=1 00:05:49.167 --rc 
genhtml_function_coverage=1 00:05:49.167 --rc genhtml_legend=1 00:05:49.167 --rc geninfo_all_blocks=1 00:05:49.167 --rc geninfo_unexecuted_blocks=1 00:05:49.167 00:05:49.167 ' 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:889ec288-9bdb-4025-81ca-1ba3f773afd1 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=889ec288-9bdb-4025-81ca-1ba3f773afd1 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.167 14:38:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.167 14:38:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.167 14:38:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.167 14:38:26 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.167 14:38:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.167 14:38:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.167 14:38:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:49.167 INFO: launching applications... 
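Annotation: the "[: : integer expression expected" diagnostic from nvmf/common.sh line 33 is recorded twice above. It is bash's complaint when test's -eq operator receives an empty string instead of an integer: the traced guard runs '[' '' -eq 1 ']'. A minimal reproduction and a conventional guard, illustrative only and not the SPDK fix:

  flag=''
  [ "$flag" -eq 1 ]        # [: : integer expression expected; exit status 2
  [ "${flag:-0}" -eq 1 ]   # guard: default empty to 0, evaluates cleanly to false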
00:05:49.167 14:38:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59091 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.167 Waiting for target to run... 00:05:49.167 14:38:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59091 /var/tmp/spdk_tgt.sock 00:05:49.167 14:38:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59091 ']' 00:05:49.168 14:38:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.168 14:38:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.168 14:38:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.168 14:38:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.168 14:38:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.168 14:38:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.168 [2024-12-09 14:38:27.076829] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:49.168 [2024-12-09 14:38:27.077089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59091 ] 00:05:49.429 [2024-12-09 14:38:27.399238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.429 [2024-12-09 14:38:27.504180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.001 14:38:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.001 14:38:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.001 00:05:50.001 14:38:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:50.001 INFO: shutting down applications... 
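Annotation: the start/stop cycle traced here follows one pattern throughout the run: waitforlisten (above, max_retries=100 against /var/tmp/spdk_tgt.sock) polls until the target's RPC socket answers, and json_config_test_shutdown_app (below) sends SIGINT and polls up to 30 times at 0.5 s intervals with kill -0. A sketch reconstructed from the xtrace, with a plain socket test standing in for the real RPC probe; the actual helpers live in common/autotest_common.sh and test/json_config/common.sh:

  start_stop_sketch() {
    local pid=$1 rpc_addr=/var/tmp/spdk_tgt.sock i
    for (( i = 0; i < 100; i++ )); do          # waitforlisten: max_retries=100
      kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
      [[ -S $rpc_addr ]] && break              # RPC socket is up (simplified check)
      sleep 0.1                                # assumed interval, not in the trace
    done
    kill -SIGINT "$pid"                        # ask the target to shut down
    for (( i = 0; i < 30; i++ )); do           # shutdown poll: json_config/common.sh@40-45
      kill -0 "$pid" 2>/dev/null || break      # process gone: shutdown done
      sleep 0.5
    done
  }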
00:05:50.001 14:38:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59091 ]] 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59091 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59091 00:05:50.001 14:38:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.574 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.574 14:38:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.574 14:38:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59091 00:05:50.574 14:38:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.148 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.148 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.148 14:38:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59091 00:05:51.148 14:38:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.720 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.720 14:38:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.720 14:38:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59091 00:05:51.720 14:38:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59091 00:05:51.981 SPDK target shutdown done 00:05:51.981 Success 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.981 14:38:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.981 14:38:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.981 ************************************ 00:05:51.981 END TEST json_config_extra_key 00:05:51.981 ************************************ 00:05:51.981 00:05:51.982 real 0m3.197s 00:05:51.982 user 0m2.758s 00:05:51.982 sys 0m0.442s 00:05:51.982 14:38:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.982 14:38:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.982 14:38:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.982 14:38:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.982 14:38:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.982 14:38:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.982 
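Annotation: every suite in this stretch opens with the same lcov version gate: lt 1.15 2 expands through cmp_versions in scripts/common.sh, splitting both versions into components and comparing them numerically (traced in full above for json_config_extra_key, and again below for alias_rpc, spdkcli_tcp and dpdk_mem_utility). Reconstructed from the xtrace, with the decimal-validation steps at common.sh@353-355 omitted:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    local op=$2 v ver1 ver2 ver1_l ver2_l
    local IFS=.-:                  # split version strings on '.', '-' or ':'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]               # all components equal: true only for <=, >=, ==
  }
  lt 1.15 2; echo $?               # 0, matching the 'return 0' traced in the log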
************************************ 00:05:51.982 START TEST alias_rpc 00:05:51.982 ************************************ 00:05:51.982 14:38:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.243 * Looking for test storage... 00:05:52.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.243 14:38:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.243 --rc genhtml_branch_coverage=1 00:05:52.243 --rc genhtml_function_coverage=1 00:05:52.243 --rc genhtml_legend=1 00:05:52.243 --rc geninfo_all_blocks=1 00:05:52.243 --rc geninfo_unexecuted_blocks=1 00:05:52.243 00:05:52.243 ' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.243 --rc genhtml_branch_coverage=1 00:05:52.243 --rc genhtml_function_coverage=1 00:05:52.243 --rc genhtml_legend=1 00:05:52.243 --rc geninfo_all_blocks=1 00:05:52.243 --rc geninfo_unexecuted_blocks=1 00:05:52.243 00:05:52.243 ' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.243 --rc genhtml_branch_coverage=1 00:05:52.243 --rc genhtml_function_coverage=1 00:05:52.243 --rc genhtml_legend=1 00:05:52.243 --rc geninfo_all_blocks=1 00:05:52.243 --rc geninfo_unexecuted_blocks=1 00:05:52.243 00:05:52.243 ' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.243 --rc genhtml_branch_coverage=1 00:05:52.243 --rc genhtml_function_coverage=1 00:05:52.243 --rc genhtml_legend=1 00:05:52.243 --rc geninfo_all_blocks=1 00:05:52.243 --rc geninfo_unexecuted_blocks=1 00:05:52.243 00:05:52.243 ' 00:05:52.243 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.243 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.243 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59185 00:05:52.243 14:38:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59185 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59185 ']' 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:52.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.243 14:38:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.244 [2024-12-09 14:38:30.316322] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:05:52.244 [2024-12-09 14:38:30.316556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59185 ] 00:05:52.503 [2024-12-09 14:38:30.472241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.503 [2024-12-09 14:38:30.593366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.444 14:38:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:53.444 14:38:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59185 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59185 ']' 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59185 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59185 00:05:53.444 killing process with pid 59185 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59185' 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 59185 00:05:53.444 14:38:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 59185 00:05:55.358 ************************************ 00:05:55.358 END TEST alias_rpc 00:05:55.358 ************************************ 00:05:55.358 00:05:55.358 real 0m3.057s 00:05:55.358 user 0m3.072s 00:05:55.358 sys 0m0.453s 00:05:55.358 14:38:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.358 14:38:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.358 14:38:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:55.358 14:38:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:55.358 14:38:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.358 14:38:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.358 14:38:33 -- common/autotest_common.sh@10 -- # set +x 00:05:55.358 ************************************ 00:05:55.358 START TEST spdkcli_tcp 00:05:55.358 ************************************ 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:55.358 * Looking for test storage... 
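Annotation: killprocess, traced just above for the alias_rpc target (pid 59185), inspects what it is about to signal before killing it. A sketch reconstructed from the xtrace (autotest_common.sh@954-978); the sudo special case is reduced to a bail-out here:

  killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # still running?
    [ "$(uname)" = Linux ] || return 1               # sketch covers Linux only
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in the trace
    [ "$process_name" = sudo ] && return 1           # real helper handles sudo specially
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap and propagate exit status
  }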
00:05:55.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.358 14:38:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.358 --rc genhtml_branch_coverage=1 00:05:55.358 --rc genhtml_function_coverage=1 00:05:55.358 --rc genhtml_legend=1 00:05:55.358 --rc geninfo_all_blocks=1 00:05:55.358 --rc geninfo_unexecuted_blocks=1 00:05:55.358 00:05:55.358 ' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.358 --rc genhtml_branch_coverage=1 00:05:55.358 --rc genhtml_function_coverage=1 00:05:55.358 --rc genhtml_legend=1 00:05:55.358 --rc geninfo_all_blocks=1 00:05:55.358 --rc geninfo_unexecuted_blocks=1 00:05:55.358 
00:05:55.358 ' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.358 --rc genhtml_branch_coverage=1 00:05:55.358 --rc genhtml_function_coverage=1 00:05:55.358 --rc genhtml_legend=1 00:05:55.358 --rc geninfo_all_blocks=1 00:05:55.358 --rc geninfo_unexecuted_blocks=1 00:05:55.358 00:05:55.358 ' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.358 --rc genhtml_branch_coverage=1 00:05:55.358 --rc genhtml_function_coverage=1 00:05:55.358 --rc genhtml_legend=1 00:05:55.358 --rc geninfo_all_blocks=1 00:05:55.358 --rc geninfo_unexecuted_blocks=1 00:05:55.358 00:05:55.358 ' 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59281 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59281 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59281 ']' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.358 14:38:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.358 14:38:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.358 [2024-12-09 14:38:33.468260] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
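Annotation: the spdkcli_tcp suite drives the target's JSON-RPC server over TCP by bridging a TCP port to the UNIX-domain socket. Stripped of the trap/err_cleanup plumbing, the commands traced below amount to:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge (socat_pid=59298 in the trace)
  socat_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"                                         # teardown, done by the test's trap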
00:05:55.358 [2024-12-09 14:38:33.468648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:05:55.620 [2024-12-09 14:38:33.644904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.881 [2024-12-09 14:38:33.767398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.881 [2024-12-09 14:38:33.767462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.454 14:38:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.454 14:38:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:56.454 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.454 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59298 00:05:56.454 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.716 [ 00:05:56.716 "bdev_malloc_delete", 00:05:56.716 "bdev_malloc_create", 00:05:56.716 "bdev_null_resize", 00:05:56.716 "bdev_null_delete", 00:05:56.716 "bdev_null_create", 00:05:56.716 "bdev_nvme_cuse_unregister", 00:05:56.716 "bdev_nvme_cuse_register", 00:05:56.716 "bdev_opal_new_user", 00:05:56.716 "bdev_opal_set_lock_state", 00:05:56.716 "bdev_opal_delete", 00:05:56.716 "bdev_opal_get_info", 00:05:56.716 "bdev_opal_create", 00:05:56.716 "bdev_nvme_opal_revert", 00:05:56.716 "bdev_nvme_opal_init", 00:05:56.716 "bdev_nvme_send_cmd", 00:05:56.716 "bdev_nvme_set_keys", 00:05:56.716 "bdev_nvme_get_path_iostat", 00:05:56.716 "bdev_nvme_get_mdns_discovery_info", 00:05:56.716 "bdev_nvme_stop_mdns_discovery", 00:05:56.716 "bdev_nvme_start_mdns_discovery", 00:05:56.716 "bdev_nvme_set_multipath_policy", 00:05:56.716 "bdev_nvme_set_preferred_path", 00:05:56.716 "bdev_nvme_get_io_paths", 00:05:56.716 "bdev_nvme_remove_error_injection", 00:05:56.716 "bdev_nvme_add_error_injection", 00:05:56.716 "bdev_nvme_get_discovery_info", 00:05:56.716 "bdev_nvme_stop_discovery", 00:05:56.716 "bdev_nvme_start_discovery", 00:05:56.716 "bdev_nvme_get_controller_health_info", 00:05:56.716 "bdev_nvme_disable_controller", 00:05:56.716 "bdev_nvme_enable_controller", 00:05:56.716 "bdev_nvme_reset_controller", 00:05:56.716 "bdev_nvme_get_transport_statistics", 00:05:56.716 "bdev_nvme_apply_firmware", 00:05:56.716 "bdev_nvme_detach_controller", 00:05:56.716 "bdev_nvme_get_controllers", 00:05:56.716 "bdev_nvme_attach_controller", 00:05:56.716 "bdev_nvme_set_hotplug", 00:05:56.716 "bdev_nvme_set_options", 00:05:56.716 "bdev_passthru_delete", 00:05:56.716 "bdev_passthru_create", 00:05:56.716 "bdev_lvol_set_parent_bdev", 00:05:56.716 "bdev_lvol_set_parent", 00:05:56.716 "bdev_lvol_check_shallow_copy", 00:05:56.716 "bdev_lvol_start_shallow_copy", 00:05:56.716 "bdev_lvol_grow_lvstore", 00:05:56.716 "bdev_lvol_get_lvols", 00:05:56.716 "bdev_lvol_get_lvstores", 00:05:56.716 "bdev_lvol_delete", 00:05:56.716 "bdev_lvol_set_read_only", 00:05:56.716 "bdev_lvol_resize", 00:05:56.716 "bdev_lvol_decouple_parent", 00:05:56.716 "bdev_lvol_inflate", 00:05:56.716 "bdev_lvol_rename", 00:05:56.716 "bdev_lvol_clone_bdev", 00:05:56.716 "bdev_lvol_clone", 00:05:56.716 "bdev_lvol_snapshot", 00:05:56.716 "bdev_lvol_create", 00:05:56.716 "bdev_lvol_delete_lvstore", 00:05:56.716 "bdev_lvol_rename_lvstore", 00:05:56.716 
"bdev_lvol_create_lvstore", 00:05:56.716 "bdev_raid_set_options", 00:05:56.716 "bdev_raid_remove_base_bdev", 00:05:56.716 "bdev_raid_add_base_bdev", 00:05:56.716 "bdev_raid_delete", 00:05:56.716 "bdev_raid_create", 00:05:56.716 "bdev_raid_get_bdevs", 00:05:56.716 "bdev_error_inject_error", 00:05:56.716 "bdev_error_delete", 00:05:56.716 "bdev_error_create", 00:05:56.716 "bdev_split_delete", 00:05:56.716 "bdev_split_create", 00:05:56.716 "bdev_delay_delete", 00:05:56.716 "bdev_delay_create", 00:05:56.716 "bdev_delay_update_latency", 00:05:56.716 "bdev_zone_block_delete", 00:05:56.716 "bdev_zone_block_create", 00:05:56.716 "blobfs_create", 00:05:56.716 "blobfs_detect", 00:05:56.716 "blobfs_set_cache_size", 00:05:56.716 "bdev_xnvme_delete", 00:05:56.716 "bdev_xnvme_create", 00:05:56.716 "bdev_aio_delete", 00:05:56.716 "bdev_aio_rescan", 00:05:56.716 "bdev_aio_create", 00:05:56.716 "bdev_ftl_set_property", 00:05:56.716 "bdev_ftl_get_properties", 00:05:56.716 "bdev_ftl_get_stats", 00:05:56.716 "bdev_ftl_unmap", 00:05:56.716 "bdev_ftl_unload", 00:05:56.716 "bdev_ftl_delete", 00:05:56.716 "bdev_ftl_load", 00:05:56.716 "bdev_ftl_create", 00:05:56.716 "bdev_virtio_attach_controller", 00:05:56.716 "bdev_virtio_scsi_get_devices", 00:05:56.716 "bdev_virtio_detach_controller", 00:05:56.716 "bdev_virtio_blk_set_hotplug", 00:05:56.716 "bdev_iscsi_delete", 00:05:56.716 "bdev_iscsi_create", 00:05:56.716 "bdev_iscsi_set_options", 00:05:56.716 "accel_error_inject_error", 00:05:56.716 "ioat_scan_accel_module", 00:05:56.716 "dsa_scan_accel_module", 00:05:56.716 "iaa_scan_accel_module", 00:05:56.716 "keyring_file_remove_key", 00:05:56.716 "keyring_file_add_key", 00:05:56.716 "keyring_linux_set_options", 00:05:56.716 "fsdev_aio_delete", 00:05:56.716 "fsdev_aio_create", 00:05:56.716 "iscsi_get_histogram", 00:05:56.716 "iscsi_enable_histogram", 00:05:56.716 "iscsi_set_options", 00:05:56.716 "iscsi_get_auth_groups", 00:05:56.716 "iscsi_auth_group_remove_secret", 00:05:56.716 "iscsi_auth_group_add_secret", 00:05:56.716 "iscsi_delete_auth_group", 00:05:56.716 "iscsi_create_auth_group", 00:05:56.716 "iscsi_set_discovery_auth", 00:05:56.716 "iscsi_get_options", 00:05:56.716 "iscsi_target_node_request_logout", 00:05:56.716 "iscsi_target_node_set_redirect", 00:05:56.716 "iscsi_target_node_set_auth", 00:05:56.716 "iscsi_target_node_add_lun", 00:05:56.716 "iscsi_get_stats", 00:05:56.716 "iscsi_get_connections", 00:05:56.716 "iscsi_portal_group_set_auth", 00:05:56.716 "iscsi_start_portal_group", 00:05:56.716 "iscsi_delete_portal_group", 00:05:56.716 "iscsi_create_portal_group", 00:05:56.716 "iscsi_get_portal_groups", 00:05:56.716 "iscsi_delete_target_node", 00:05:56.716 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.716 "iscsi_target_node_add_pg_ig_maps", 00:05:56.716 "iscsi_create_target_node", 00:05:56.716 "iscsi_get_target_nodes", 00:05:56.716 "iscsi_delete_initiator_group", 00:05:56.716 "iscsi_initiator_group_remove_initiators", 00:05:56.716 "iscsi_initiator_group_add_initiators", 00:05:56.716 "iscsi_create_initiator_group", 00:05:56.716 "iscsi_get_initiator_groups", 00:05:56.716 "nvmf_set_crdt", 00:05:56.716 "nvmf_set_config", 00:05:56.716 "nvmf_set_max_subsystems", 00:05:56.717 "nvmf_stop_mdns_prr", 00:05:56.717 "nvmf_publish_mdns_prr", 00:05:56.717 "nvmf_subsystem_get_listeners", 00:05:56.717 "nvmf_subsystem_get_qpairs", 00:05:56.717 "nvmf_subsystem_get_controllers", 00:05:56.717 "nvmf_get_stats", 00:05:56.717 "nvmf_get_transports", 00:05:56.717 "nvmf_create_transport", 00:05:56.717 "nvmf_get_targets", 00:05:56.717 
"nvmf_delete_target", 00:05:56.717 "nvmf_create_target", 00:05:56.717 "nvmf_subsystem_allow_any_host", 00:05:56.717 "nvmf_subsystem_set_keys", 00:05:56.717 "nvmf_subsystem_remove_host", 00:05:56.717 "nvmf_subsystem_add_host", 00:05:56.717 "nvmf_ns_remove_host", 00:05:56.717 "nvmf_ns_add_host", 00:05:56.717 "nvmf_subsystem_remove_ns", 00:05:56.717 "nvmf_subsystem_set_ns_ana_group", 00:05:56.717 "nvmf_subsystem_add_ns", 00:05:56.717 "nvmf_subsystem_listener_set_ana_state", 00:05:56.717 "nvmf_discovery_get_referrals", 00:05:56.717 "nvmf_discovery_remove_referral", 00:05:56.717 "nvmf_discovery_add_referral", 00:05:56.717 "nvmf_subsystem_remove_listener", 00:05:56.717 "nvmf_subsystem_add_listener", 00:05:56.717 "nvmf_delete_subsystem", 00:05:56.717 "nvmf_create_subsystem", 00:05:56.717 "nvmf_get_subsystems", 00:05:56.717 "env_dpdk_get_mem_stats", 00:05:56.717 "nbd_get_disks", 00:05:56.717 "nbd_stop_disk", 00:05:56.717 "nbd_start_disk", 00:05:56.717 "ublk_recover_disk", 00:05:56.717 "ublk_get_disks", 00:05:56.717 "ublk_stop_disk", 00:05:56.717 "ublk_start_disk", 00:05:56.717 "ublk_destroy_target", 00:05:56.717 "ublk_create_target", 00:05:56.717 "virtio_blk_create_transport", 00:05:56.717 "virtio_blk_get_transports", 00:05:56.717 "vhost_controller_set_coalescing", 00:05:56.717 "vhost_get_controllers", 00:05:56.717 "vhost_delete_controller", 00:05:56.717 "vhost_create_blk_controller", 00:05:56.717 "vhost_scsi_controller_remove_target", 00:05:56.717 "vhost_scsi_controller_add_target", 00:05:56.717 "vhost_start_scsi_controller", 00:05:56.717 "vhost_create_scsi_controller", 00:05:56.717 "thread_set_cpumask", 00:05:56.717 "scheduler_set_options", 00:05:56.717 "framework_get_governor", 00:05:56.717 "framework_get_scheduler", 00:05:56.717 "framework_set_scheduler", 00:05:56.717 "framework_get_reactors", 00:05:56.717 "thread_get_io_channels", 00:05:56.717 "thread_get_pollers", 00:05:56.717 "thread_get_stats", 00:05:56.717 "framework_monitor_context_switch", 00:05:56.717 "spdk_kill_instance", 00:05:56.717 "log_enable_timestamps", 00:05:56.717 "log_get_flags", 00:05:56.717 "log_clear_flag", 00:05:56.717 "log_set_flag", 00:05:56.717 "log_get_level", 00:05:56.717 "log_set_level", 00:05:56.717 "log_get_print_level", 00:05:56.717 "log_set_print_level", 00:05:56.717 "framework_enable_cpumask_locks", 00:05:56.717 "framework_disable_cpumask_locks", 00:05:56.717 "framework_wait_init", 00:05:56.717 "framework_start_init", 00:05:56.717 "scsi_get_devices", 00:05:56.717 "bdev_get_histogram", 00:05:56.717 "bdev_enable_histogram", 00:05:56.717 "bdev_set_qos_limit", 00:05:56.717 "bdev_set_qd_sampling_period", 00:05:56.717 "bdev_get_bdevs", 00:05:56.717 "bdev_reset_iostat", 00:05:56.717 "bdev_get_iostat", 00:05:56.717 "bdev_examine", 00:05:56.717 "bdev_wait_for_examine", 00:05:56.717 "bdev_set_options", 00:05:56.717 "accel_get_stats", 00:05:56.717 "accel_set_options", 00:05:56.717 "accel_set_driver", 00:05:56.717 "accel_crypto_key_destroy", 00:05:56.717 "accel_crypto_keys_get", 00:05:56.717 "accel_crypto_key_create", 00:05:56.717 "accel_assign_opc", 00:05:56.717 "accel_get_module_info", 00:05:56.717 "accel_get_opc_assignments", 00:05:56.717 "vmd_rescan", 00:05:56.717 "vmd_remove_device", 00:05:56.717 "vmd_enable", 00:05:56.717 "sock_get_default_impl", 00:05:56.717 "sock_set_default_impl", 00:05:56.717 "sock_impl_set_options", 00:05:56.717 "sock_impl_get_options", 00:05:56.717 "iobuf_get_stats", 00:05:56.717 "iobuf_set_options", 00:05:56.717 "keyring_get_keys", 00:05:56.717 "framework_get_pci_devices", 00:05:56.717 
"framework_get_config", 00:05:56.717 "framework_get_subsystems", 00:05:56.717 "fsdev_set_opts", 00:05:56.717 "fsdev_get_opts", 00:05:56.717 "trace_get_info", 00:05:56.717 "trace_get_tpoint_group_mask", 00:05:56.717 "trace_disable_tpoint_group", 00:05:56.717 "trace_enable_tpoint_group", 00:05:56.717 "trace_clear_tpoint_mask", 00:05:56.717 "trace_set_tpoint_mask", 00:05:56.717 "notify_get_notifications", 00:05:56.717 "notify_get_types", 00:05:56.717 "spdk_get_version", 00:05:56.717 "rpc_get_methods" 00:05:56.717 ] 00:05:56.717 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.717 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.717 14:38:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59281 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59281 ']' 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59281 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59281 00:05:56.717 killing process with pid 59281 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59281' 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59281 00:05:56.717 14:38:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59281 00:05:58.103 00:05:58.103 real 0m2.987s 00:05:58.103 user 0m5.303s 00:05:58.103 sys 0m0.521s 00:05:58.103 14:38:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.103 ************************************ 00:05:58.103 14:38:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.103 END TEST spdkcli_tcp 00:05:58.103 ************************************ 00:05:58.366 14:38:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.366 14:38:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.366 14:38:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.366 14:38:36 -- common/autotest_common.sh@10 -- # set +x 00:05:58.366 ************************************ 00:05:58.366 START TEST dpdk_mem_utility 00:05:58.366 ************************************ 00:05:58.366 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.366 * Looking for test storage... 
00:05:58.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:58.366 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.366 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.366 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.366 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:58.366 14:38:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.367 14:38:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.367 14:38:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.367 14:38:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.367 --rc genhtml_branch_coverage=1 00:05:58.367 --rc genhtml_function_coverage=1 00:05:58.367 --rc genhtml_legend=1 00:05:58.367 --rc geninfo_all_blocks=1 00:05:58.367 --rc geninfo_unexecuted_blocks=1 00:05:58.367 00:05:58.367 ' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.367 --rc 
genhtml_branch_coverage=1 00:05:58.367 --rc genhtml_function_coverage=1 00:05:58.367 --rc genhtml_legend=1 00:05:58.367 --rc geninfo_all_blocks=1 00:05:58.367 --rc geninfo_unexecuted_blocks=1 00:05:58.367 00:05:58.367 ' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.367 --rc genhtml_branch_coverage=1 00:05:58.367 --rc genhtml_function_coverage=1 00:05:58.367 --rc genhtml_legend=1 00:05:58.367 --rc geninfo_all_blocks=1 00:05:58.367 --rc geninfo_unexecuted_blocks=1 00:05:58.367 00:05:58.367 ' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.367 --rc genhtml_branch_coverage=1 00:05:58.367 --rc genhtml_function_coverage=1 00:05:58.367 --rc genhtml_legend=1 00:05:58.367 --rc geninfo_all_blocks=1 00:05:58.367 --rc geninfo_unexecuted_blocks=1 00:05:58.367 00:05:58.367 ' 00:05:58.367 14:38:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.367 14:38:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59392 00:05:58.367 14:38:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59392 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59392 ']' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.367 14:38:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.367 14:38:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.367 [2024-12-09 14:38:36.482628] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
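Annotation: the memory accounting dump that follows is produced in three traced steps: the running target is asked to dump its DPDK memory state over RPC, then dpdk_mem_info.py post-processes the dump file, once as a summary and once per-element. Paths are taken from the trace; rpc_cmd is the suite's wrapper around scripts/rpc.py:

  rpc_cmd env_dpdk_get_mem_stats                               # target writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps, mempools, memzones summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element map for heap 0 (the long list below)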
00:05:58.367 [2024-12-09 14:38:36.482754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:05:58.628 [2024-12-09 14:38:36.637070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.628 [2024-12-09 14:38:36.740029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.576 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.576 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:59.576 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.576 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.576 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.576 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.576 { 00:05:59.576 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.576 } 00:05:59.576 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.576 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.576 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:59.576 1 heaps totaling size 824.000000 MiB 00:05:59.576 size: 824.000000 MiB heap id: 0 00:05:59.576 end heaps---------- 00:05:59.576 9 mempools totaling size 603.782043 MiB 00:05:59.576 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.576 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.576 size: 100.555481 MiB name: bdev_io_59392 00:05:59.576 size: 50.003479 MiB name: msgpool_59392 00:05:59.576 size: 36.509338 MiB name: fsdev_io_59392 00:05:59.576 size: 21.763794 MiB name: PDU_Pool 00:05:59.576 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.576 size: 4.133484 MiB name: evtpool_59392 00:05:59.576 size: 0.026123 MiB name: Session_Pool 00:05:59.576 end mempools------- 00:05:59.576 6 memzones totaling size 4.142822 MiB 00:05:59.576 size: 1.000366 MiB name: RG_ring_0_59392 00:05:59.576 size: 1.000366 MiB name: RG_ring_1_59392 00:05:59.576 size: 1.000366 MiB name: RG_ring_4_59392 00:05:59.576 size: 1.000366 MiB name: RG_ring_5_59392 00:05:59.576 size: 0.125366 MiB name: RG_ring_2_59392 00:05:59.576 size: 0.015991 MiB name: RG_ring_3_59392 00:05:59.576 end memzones------- 00:05:59.576 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.576 heap id: 0 total size: 824.000000 MiB number of busy elements: 330 number of free elements: 18 00:05:59.576 list of free elements. 
size: 16.777710 MiB 00:05:59.576 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:59.576 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:59.576 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:59.576 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:59.576 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:59.576 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:59.576 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:59.576 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:59.576 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:59.576 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:59.576 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:59.576 element at address: 0x20001b400000 with size: 0.559021 MiB 00:05:59.576 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:59.576 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:59.576 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:59.576 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:59.576 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:59.576 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:59.576 list of standard malloc elements. size: 199.291382 MiB 00:05:59.576 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:59.576 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:59.576 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:59.576 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:59.576 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:59.576 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:59.576 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:59.576 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:59.576 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:59.576 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:59.576 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:59.576 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:59.576 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:59.577 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:59.577 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:59.577 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:59.577 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b490ec0 with size: 0.000244 MiB 
00:05:59.578 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:59.578 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:59.578 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:59.578 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886c980 
with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:59.578 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886fa80 with size: 0.000244 MiB 
00:05:59.579 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:59.579 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:59.579 list of memzone associated elements. size: 607.930908 MiB 00:05:59.579 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:59.579 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.579 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:59.579 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.579 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:59.579 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59392_0 00:05:59.579 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:59.579 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59392_0 00:05:59.579 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:59.579 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59392_0 00:05:59.579 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:59.579 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.579 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:59.579 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.579 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:59.579 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59392_0 00:05:59.579 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:59.579 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59392 00:05:59.579 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:59.579 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59392 00:05:59.579 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:59.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.579 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:59.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.579 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:59.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.579 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:59.579 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.579 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:59.579 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59392 00:05:59.579 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:59.579 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59392 00:05:59.579 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:59.579 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59392 00:05:59.579 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:59.579 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59392 00:05:59.579 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:59.579 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59392 00:05:59.579 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:59.579 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59392 00:05:59.579 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:59.579 
associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.579 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:59.579 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.579 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:59.579 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.579 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:59.579 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59392 00:05:59.579 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:59.579 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59392 00:05:59.579 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:59.579 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.579 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:59.579 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.579 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:59.579 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59392 00:05:59.579 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:59.579 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.579 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:59.579 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59392 00:05:59.579 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:59.579 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59392 00:05:59.579 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:59.579 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59392 00:05:59.579 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:59.579 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.579 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.579 14:38:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59392 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59392 ']' 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59392 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59392 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.579 killing process with pid 59392 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59392' 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59392 00:05:59.579 14:38:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59392 00:06:00.964 00:06:00.964 real 0m2.483s 00:06:00.964 user 0m2.452s 00:06:00.964 sys 0m0.447s 00:06:00.964 14:38:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.964 ************************************ 00:06:00.964 END TEST dpdk_mem_utility 00:06:00.964 ************************************ 00:06:00.964 14:38:38 dpdk_mem_utility -- common/autotest_common.sh@10 
-- # set +x 00:06:00.964 14:38:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:00.964 14:38:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.964 14:38:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.964 14:38:38 -- common/autotest_common.sh@10 -- # set +x 00:06:00.964 ************************************ 00:06:00.964 START TEST event 00:06:00.964 ************************************ 00:06:00.964 14:38:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:00.964 * Looking for test storage... 00:06:00.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.964 14:38:38 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.964 14:38:38 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.964 14:38:38 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.964 14:38:38 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.964 14:38:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.964 14:38:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.964 14:38:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.964 14:38:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.964 14:38:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.964 14:38:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.964 14:38:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.964 14:38:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.964 14:38:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.964 14:38:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.964 14:38:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.964 14:38:38 event -- scripts/common.sh@344 -- # case "$op" in 00:06:00.964 14:38:38 event -- scripts/common.sh@345 -- # : 1 00:06:00.964 14:38:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.964 14:38:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.964 14:38:38 event -- scripts/common.sh@365 -- # decimal 1 00:06:00.964 14:38:38 event -- scripts/common.sh@353 -- # local d=1 00:06:00.964 14:38:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.964 14:38:38 event -- scripts/common.sh@355 -- # echo 1 00:06:00.964 14:38:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.964 14:38:38 event -- scripts/common.sh@366 -- # decimal 2 00:06:00.964 14:38:38 event -- scripts/common.sh@353 -- # local d=2 00:06:00.964 14:38:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.964 14:38:38 event -- scripts/common.sh@355 -- # echo 2 00:06:00.964 14:38:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.965 14:38:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.965 14:38:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.965 14:38:38 event -- scripts/common.sh@368 -- # return 0 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.965 --rc genhtml_branch_coverage=1 00:06:00.965 --rc genhtml_function_coverage=1 00:06:00.965 --rc genhtml_legend=1 00:06:00.965 --rc geninfo_all_blocks=1 00:06:00.965 --rc geninfo_unexecuted_blocks=1 00:06:00.965 00:06:00.965 ' 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.965 --rc genhtml_branch_coverage=1 00:06:00.965 --rc genhtml_function_coverage=1 00:06:00.965 --rc genhtml_legend=1 00:06:00.965 --rc geninfo_all_blocks=1 00:06:00.965 --rc geninfo_unexecuted_blocks=1 00:06:00.965 00:06:00.965 ' 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.965 --rc genhtml_branch_coverage=1 00:06:00.965 --rc genhtml_function_coverage=1 00:06:00.965 --rc genhtml_legend=1 00:06:00.965 --rc geninfo_all_blocks=1 00:06:00.965 --rc geninfo_unexecuted_blocks=1 00:06:00.965 00:06:00.965 ' 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.965 --rc genhtml_branch_coverage=1 00:06:00.965 --rc genhtml_function_coverage=1 00:06:00.965 --rc genhtml_legend=1 00:06:00.965 --rc geninfo_all_blocks=1 00:06:00.965 --rc geninfo_unexecuted_blocks=1 00:06:00.965 00:06:00.965 ' 00:06:00.965 14:38:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:00.965 14:38:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.965 14:38:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:00.965 14:38:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.965 14:38:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.965 ************************************ 00:06:00.965 START TEST event_perf 00:06:00.965 ************************************ 00:06:00.965 14:38:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.965 Running I/O for 1 seconds...[2024-12-09 
14:38:38.985371] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:00.965 [2024-12-09 14:38:38.985512] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:06:01.225 [2024-12-09 14:38:39.147232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.225 [2024-12-09 14:38:39.269285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.225 [2024-12-09 14:38:39.269570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.225 [2024-12-09 14:38:39.269915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.225 Running I/O for 1 seconds...[2024-12-09 14:38:39.269920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.614 00:06:02.614 lcore 0: 160541 00:06:02.614 lcore 1: 160542 00:06:02.614 lcore 2: 160542 00:06:02.614 lcore 3: 160544 00:06:02.614 done. 00:06:02.614 00:06:02.614 real 0m1.502s 00:06:02.614 user 0m4.293s 00:06:02.614 sys 0m0.086s 00:06:02.614 ************************************ 00:06:02.614 END TEST event_perf 00:06:02.614 ************************************ 00:06:02.614 14:38:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.614 14:38:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.614 14:38:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.614 14:38:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:02.614 14:38:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.614 14:38:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.614 ************************************ 00:06:02.614 START TEST event_reactor 00:06:02.614 ************************************ 00:06:02.614 14:38:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.614 [2024-12-09 14:38:40.536589] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
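The event_reactor unit starting here spins up a single reactor on core 0 for the requested duration and logs every poller callback it fires. A minimal way to reproduce just this step outside the harness, using the same binary and flag the trace above invokes (sudo is an assumption here, for hugepage setup):

  sudo /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1

A passing run looks like the output below: a test_start marker, one oneshot callback, tick <period> lines (100, 250, 500) as each timed poller fires, and test_end once the one-second timer expires.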
00:06:02.614 [2024-12-09 14:38:40.536699] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59523 ] 00:06:02.614 [2024-12-09 14:38:40.698432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.875 [2024-12-09 14:38:40.818310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.270 test_start 00:06:04.270 oneshot 00:06:04.270 tick 100 00:06:04.270 tick 100 00:06:04.270 tick 250 00:06:04.270 tick 100 00:06:04.270 tick 100 00:06:04.270 tick 100 00:06:04.271 tick 250 00:06:04.271 tick 500 00:06:04.271 tick 100 00:06:04.271 tick 100 00:06:04.271 tick 250 00:06:04.271 tick 100 00:06:04.271 tick 100 00:06:04.271 test_end 00:06:04.271 00:06:04.271 real 0m1.482s 00:06:04.271 user 0m1.293s 00:06:04.271 sys 0m0.081s 00:06:04.271 14:38:41 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.271 ************************************ 00:06:04.271 END TEST event_reactor 00:06:04.271 14:38:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.271 ************************************ 00:06:04.271 14:38:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.271 14:38:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.271 14:38:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.271 14:38:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.271 ************************************ 00:06:04.271 START TEST event_reactor_perf 00:06:04.271 ************************************ 00:06:04.271 14:38:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.271 [2024-12-09 14:38:42.077078] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
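reactor_perf, starting here, differs from the reactor test above in that it submits events to a single reactor back-to-back and reports only the aggregate completion rate. A standalone re-run under the same assumptions:

  sudo /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1

Its output is bracketed by test_start/test_end around a single summary line of the form 'Performance: <N> events per second'; on this host the run below measured 315348.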
00:06:04.271 [2024-12-09 14:38:42.077197] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:06:04.271 [2024-12-09 14:38:42.239138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.271 [2024-12-09 14:38:42.357947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.688 test_start 00:06:05.688 test_end 00:06:05.688 Performance: 315348 events per second 00:06:05.688 00:06:05.688 real 0m1.479s 00:06:05.688 user 0m1.295s 00:06:05.688 sys 0m0.075s 00:06:05.688 14:38:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.688 14:38:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.688 ************************************ 00:06:05.688 END TEST event_reactor_perf 00:06:05.688 ************************************ 00:06:05.688 14:38:43 event -- event/event.sh@49 -- # uname -s 00:06:05.688 14:38:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:05.688 14:38:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.688 14:38:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.688 14:38:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.688 14:38:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.688 ************************************ 00:06:05.688 START TEST event_scheduler 00:06:05.688 ************************************ 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.688 * Looking for test storage... 
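The xtrace that follows (and the identical block earlier in the event suite) is scripts/common.sh deciding whether the installed lcov is new enough: its lt helper splits the two dotted versions on '.', '-' and ':', then compares them numerically field by field, treating missing fields as 0. A rough standalone equivalent of that comparison, assuming plain numeric fields without leading zeros:

  lt() {                       # returns 0 (true) when version $1 < version $2
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
    done
    return 1                   # equal -> not less-than
  }

  lt 1.15 2 && echo 'lcov < 2: use the legacy LCOV_OPTS branch-coverage flags'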
00:06:05.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.688 14:38:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.688 14:38:43 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.689 --rc genhtml_branch_coverage=1 00:06:05.689 --rc genhtml_function_coverage=1 00:06:05.689 --rc genhtml_legend=1 00:06:05.689 --rc geninfo_all_blocks=1 00:06:05.689 --rc geninfo_unexecuted_blocks=1 00:06:05.689 00:06:05.689 ' 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.689 --rc genhtml_branch_coverage=1 00:06:05.689 --rc genhtml_function_coverage=1 00:06:05.689 --rc genhtml_legend=1 00:06:05.689 --rc geninfo_all_blocks=1 00:06:05.689 --rc geninfo_unexecuted_blocks=1 00:06:05.689 00:06:05.689 ' 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.689 --rc genhtml_branch_coverage=1 00:06:05.689 --rc genhtml_function_coverage=1 00:06:05.689 --rc genhtml_legend=1 00:06:05.689 --rc geninfo_all_blocks=1 00:06:05.689 --rc geninfo_unexecuted_blocks=1 00:06:05.689 00:06:05.689 ' 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.689 --rc genhtml_branch_coverage=1 00:06:05.689 --rc genhtml_function_coverage=1 00:06:05.689 --rc genhtml_legend=1 00:06:05.689 --rc geninfo_all_blocks=1 00:06:05.689 --rc geninfo_unexecuted_blocks=1 00:06:05.689 00:06:05.689 ' 00:06:05.689 14:38:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:05.689 14:38:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59630 00:06:05.689 14:38:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.689 14:38:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59630 00:06:05.689 14:38:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:05.689 14:38:43 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59630 ']' 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.689 14:38:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.689 [2024-12-09 14:38:43.802644] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:05.689 [2024-12-09 14:38:43.802777] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:06:05.950 [2024-12-09 14:38:43.962987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.950 [2024-12-09 14:38:44.068180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.950 [2024-12-09 14:38:44.068605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.950 [2024-12-09 14:38:44.068681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.950 [2024-12-09 14:38:44.068698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:06.891 14:38:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.891 POWER: Cannot set governor of lcore 0 to userspace 00:06:06.891 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.891 POWER: Cannot set governor of lcore 0 to performance 00:06:06.891 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.891 POWER: Cannot set governor of lcore 0 to userspace 00:06:06.891 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.891 POWER: Cannot set governor of lcore 0 to userspace 00:06:06.891 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:06.891 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:06.891 POWER: Unable to set Power Management Environment for lcore 0 00:06:06.891 [2024-12-09 14:38:44.653955] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:06.891 [2024-12-09 14:38:44.653974] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:06.891 [2024-12-09 14:38:44.653982] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:06.891 [2024-12-09 14:38:44.653997] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.891 [2024-12-09 14:38:44.654004] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.891 [2024-12-09 14:38:44.654012] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 [2024-12-09 14:38:44.864155] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 ************************************ 00:06:06.891 START TEST scheduler_create_thread 00:06:06.891 ************************************ 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 2 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 3 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 4 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 5 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 6 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 7 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 8 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 9 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 10 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.891 14:38:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 14:38:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.817 14:38:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.817 14:38:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.817 14:38:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.817 14:38:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.415 ************************************ 00:06:09.415 END TEST scheduler_create_thread 00:06:09.415 ************************************ 00:06:09.415 14:38:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.415 00:06:09.415 real 0m2.616s 00:06:09.415 user 0m0.018s 00:06:09.415 sys 0m0.004s 00:06:09.415 14:38:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.415 14:38:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.704 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.704 14:38:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59630 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59630 ']' 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59630 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59630 00:06:09.704 killing process with pid 59630 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59630' 00:06:09.704 14:38:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59630 00:06:09.704 14:38:47 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59630 00:06:09.964 [2024-12-09 14:38:47.973032] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:10.907 00:06:10.907 real 0m5.324s 00:06:10.907 user 0m9.342s 00:06:10.907 sys 0m0.368s 00:06:10.907 ************************************ 00:06:10.907 END TEST event_scheduler 00:06:10.907 ************************************ 00:06:10.907 14:38:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.907 14:38:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.907 14:38:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.907 14:38:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.907 14:38:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.907 14:38:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.907 14:38:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.907 ************************************ 00:06:10.907 START TEST app_repeat 00:06:10.907 ************************************ 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59737 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.907 Process app_repeat pid: 59737 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59737' 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.907 spdk_app_start Round 0 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.907 14:38:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59737 /var/tmp/spdk-nbd.sock 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.907 14:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.168 [2024-12-09 14:38:49.046649] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
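The scheduler_create_thread sequence traced above exercises the app's plugin RPCs end to end: scheduler_thread_create takes a thread name, an optional core mask, and an active percentage, and the returned thread id feeds scheduler_thread_set_active and scheduler_thread_delete. A standalone sketch of the same call sequence, assuming rpc.py's default socket and path (the plugin name, method names, and arguments come from the trace; capturing the id from stdout mirrors the thread_id=11 and thread_id=12 assignments above):

    rpc='scripts/rpc.py --plugin scheduler_plugin'    # path is an assumption

    # one idle thread pinned per core, 0% active
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # unpinned threads at different load levels
    $rpc scheduler_thread_create -n one_third_active -a 30
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$thread_id" 50    # raise to 50% busy

    # a thread created only to be torn down again
    thread_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$thread_id"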
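The killprocess steps at autotest_common.sh@954 through @978 above are the harness's standard teardown: check the PID argument, probe the process with kill -0, resolve its command name (reactor_2 in this run), special-case a sudo wrapper, then kill and wait. A minimal reconstruction of that flow (the real helper's sudo handling is richer; bailing out here is a simplification):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                # mirrors '[' -z 59630 ']'
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 above
            if [ "$process_name" = sudo ]; then
                return 1    # assumption: refuse to signal a sudo wrapper
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so the next test starts clean
    }

Note that wait can only reap a child of the current shell, which is consistent with the test script having launched the app itself.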
00:06:11.168 [2024-12-09 14:38:49.047088] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:06:11.168 [2024-12-09 14:38:49.217542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.429 [2024-12-09 14:38:49.387552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.429 [2024-12-09 14:38:49.387573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.000 14:38:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.000 14:38:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.000 14:38:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.260 Malloc0 00:06:12.260 14:38:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.521 Malloc1 00:06:12.521 14:38:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.521 14:38:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.781 /dev/nbd0 00:06:12.781 14:38:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.781 14:38:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.781 14:38:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.781 14:38:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.781 1+0 records in 00:06:12.782 1+0 records out 00:06:12.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101829 s, 4.0 MB/s 00:06:12.782 14:38:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.782 14:38:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.782 14:38:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.782 14:38:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.782 14:38:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.782 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.782 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.782 14:38:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.043 /dev/nbd1 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.043 1+0 records in 00:06:13.043 1+0 records out 00:06:13.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644331 s, 6.4 MB/s 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.043 14:38:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.043 14:38:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.043 
14:38:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.304 { 00:06:13.304 "nbd_device": "/dev/nbd0", 00:06:13.304 "bdev_name": "Malloc0" 00:06:13.304 }, 00:06:13.304 { 00:06:13.304 "nbd_device": "/dev/nbd1", 00:06:13.304 "bdev_name": "Malloc1" 00:06:13.304 } 00:06:13.304 ]' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.304 { 00:06:13.304 "nbd_device": "/dev/nbd0", 00:06:13.304 "bdev_name": "Malloc0" 00:06:13.304 }, 00:06:13.304 { 00:06:13.304 "nbd_device": "/dev/nbd1", 00:06:13.304 "bdev_name": "Malloc1" 00:06:13.304 } 00:06:13.304 ]' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.304 /dev/nbd1' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.304 /dev/nbd1' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.304 256+0 records in 00:06:13.304 256+0 records out 00:06:13.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488542 s, 215 MB/s 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.304 256+0 records in 00:06:13.304 256+0 records out 00:06:13.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350645 s, 29.9 MB/s 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.304 256+0 records in 00:06:13.304 256+0 records out 00:06:13.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0412244 s, 25.4 MB/s 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.304 14:38:51 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.304 14:38:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.569 14:38:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.829 14:38:51 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.829 14:38:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.087 14:38:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.088 14:38:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.088 14:38:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.088 14:38:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.349 14:38:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.353 [2024-12-09 14:38:53.311088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.353 [2024-12-09 14:38:53.441186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.353 [2024-12-09 14:38:53.441191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.612 [2024-12-09 14:38:53.582623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.612 [2024-12-09 14:38:53.582748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.523 spdk_app_start Round 1 00:06:17.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.523 14:38:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.523 14:38:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.523 14:38:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59737 /var/tmp/spdk-nbd.sock 00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
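Every round gates on the waitfornbd steps traced at autotest_common.sh@872 through @893 above: poll /proc/partitions until the kernel exports the device, then force one O_DIRECT read through it and size-check the result. A sketch, assuming the 20-try limit from the (( i <= 20 )) guards and a hypothetical poll interval (the real helper reads into the repo's test/event/nbdtest file; /tmp stands in here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # poll interval is an assumption
        done
        ((i <= 20)) || return 1    # device never showed up
        # prove the mapping is live: one direct-I/O block, then a size check
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # mirrors '[' 4096 '!=' 0 ']' in the trace
    }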
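Once both devices pass that gate, the write/verify pass seen at nbd_common.sh@70 through @85 above is symmetric: one urandom-seeded file is pushed onto every device with O_DIRECT, then cmp re-reads the first 1M of each device against the same file, and the temp file is removed. Condensed, with /tmp standing in for the repo's nbdrandtest path:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    # write phase: 256 random 4 KiB blocks, then onto each device directly
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-for-byte compare of the first 1M of each device
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"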
00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.523 14:38:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.783 14:38:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.783 14:38:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.783 14:38:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.783 Malloc0 00:06:18.043 14:38:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.043 Malloc1 00:06:18.043 14:38:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.043 14:38:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.304 /dev/nbd0 00:06:18.304 14:38:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.304 14:38:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.304 1+0 records in 00:06:18.304 1+0 records out 
00:06:18.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399068 s, 10.3 MB/s 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.304 14:38:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.304 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.304 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.304 14:38:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.564 /dev/nbd1 00:06:18.564 14:38:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.564 14:38:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.564 1+0 records in 00:06:18.564 1+0 records out 00:06:18.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377758 s, 10.8 MB/s 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.564 14:38:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.564 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.564 14:38:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.565 14:38:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.565 14:38:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.565 14:38:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.824 { 00:06:18.824 "nbd_device": "/dev/nbd0", 00:06:18.824 "bdev_name": "Malloc0" 00:06:18.824 }, 00:06:18.824 { 00:06:18.824 "nbd_device": "/dev/nbd1", 00:06:18.824 "bdev_name": "Malloc1" 00:06:18.824 } 
00:06:18.824 ]' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.824 { 00:06:18.824 "nbd_device": "/dev/nbd0", 00:06:18.824 "bdev_name": "Malloc0" 00:06:18.824 }, 00:06:18.824 { 00:06:18.824 "nbd_device": "/dev/nbd1", 00:06:18.824 "bdev_name": "Malloc1" 00:06:18.824 } 00:06:18.824 ]' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.824 /dev/nbd1' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.824 /dev/nbd1' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.824 256+0 records in 00:06:18.824 256+0 records out 00:06:18.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0076511 s, 137 MB/s 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.824 256+0 records in 00:06:18.824 256+0 records out 00:06:18.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017867 s, 58.7 MB/s 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.824 14:38:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.084 256+0 records in 00:06:19.084 256+0 records out 00:06:19.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0791325 s, 13.3 MB/s 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.084 14:38:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.084 14:38:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.343 14:38:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.344 14:38:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.604 14:38:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.604 14:38:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.186 14:38:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.764 [2024-12-09 14:38:58.736429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.764 [2024-12-09 14:38:58.821264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.764 [2024-12-09 14:38:58.821385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.026 [2024-12-09 14:38:58.935311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.026 [2024-12-09 14:38:58.935376] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.933 14:39:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.933 spdk_app_start Round 2 00:06:22.933 14:39:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.933 14:39:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59737 /var/tmp/spdk-nbd.sock 00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
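Each round in this log closes the same way: the test asks the app to terminate itself over the RPC socket, sleeps, and lets the next loop iteration bring up a fresh instance. The shape of that loop, with the paths and the {0..2} bound taken from the trace and the per-round work elided:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... waitforlisten, two bdev_malloc_create 64 4096 calls, nbd verify pass ...
        "$rpc_py" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3    # give the app time to unwind before the next round
    done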
00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.933 14:39:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.194 14:39:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.194 14:39:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.194 14:39:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.453 Malloc0 00:06:23.453 14:39:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.714 Malloc1 00:06:23.714 14:39:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.714 14:39:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.973 /dev/nbd0 00:06:23.973 14:39:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.973 14:39:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.973 14:39:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.973 14:39:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.974 1+0 records in 00:06:23.974 1+0 records out 
00:06:23.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257749 s, 15.9 MB/s 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.974 14:39:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.974 14:39:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.974 14:39:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.974 14:39:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.235 /dev/nbd1 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.235 1+0 records in 00:06:24.235 1+0 records out 00:06:24.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278132 s, 14.7 MB/s 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.235 14:39:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.235 14:39:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.235 { 00:06:24.235 "nbd_device": "/dev/nbd0", 00:06:24.235 "bdev_name": "Malloc0" 00:06:24.235 }, 00:06:24.235 { 00:06:24.235 "nbd_device": "/dev/nbd1", 00:06:24.235 "bdev_name": "Malloc1" 00:06:24.235 } 
00:06:24.235 ]' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.496 { 00:06:24.496 "nbd_device": "/dev/nbd0", 00:06:24.496 "bdev_name": "Malloc0" 00:06:24.496 }, 00:06:24.496 { 00:06:24.496 "nbd_device": "/dev/nbd1", 00:06:24.496 "bdev_name": "Malloc1" 00:06:24.496 } 00:06:24.496 ]' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.496 /dev/nbd1' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.496 /dev/nbd1' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.496 256+0 records in 00:06:24.496 256+0 records out 00:06:24.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063181 s, 166 MB/s 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.496 256+0 records in 00:06:24.496 256+0 records out 00:06:24.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171608 s, 61.1 MB/s 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.496 256+0 records in 00:06:24.496 256+0 records out 00:06:24.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185867 s, 56.4 MB/s 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.496 14:39:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.757 14:39:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.019 14:39:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.019 14:39:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.019 14:39:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.019 14:39:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.019 14:39:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.019 14:39:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.019 14:39:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.591 14:39:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.162 [2024-12-09 14:39:04.032994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.162 [2024-12-09 14:39:04.123341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.162 [2024-12-09 14:39:04.123438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.162 [2024-12-09 14:39:04.267385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.162 [2024-12-09 14:39:04.267466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.697 14:39:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59737 /var/tmp/spdk-nbd.sock 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
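After the devices are stopped, the count check re-queries the target exactly as nbd_common.sh@63 through @66 trace it here: nbd_get_disks returns '[]', jq strips it to an empty name list, and grep -c turns that into 0, with a trailing true so grep's nonzero exit on no matches cannot abort a set -e shell. A sketch of that helper:

    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c still prints the 0 it counted even when it exits 1
        echo "$names" | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)    # expect 0 after nbd_stop_disks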
00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.697 14:39:06 event.app_repeat -- event/event.sh@39 -- # killprocess 59737 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59737 ']' 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59737 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59737 00:06:28.697 killing process with pid 59737 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59737' 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59737 00:06:28.697 14:39:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59737 00:06:29.262 spdk_app_start is called in Round 0. 00:06:29.262 Shutdown signal received, stop current app iteration 00:06:29.262 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:06:29.262 spdk_app_start is called in Round 1. 00:06:29.262 Shutdown signal received, stop current app iteration 00:06:29.262 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:06:29.262 spdk_app_start is called in Round 2. 00:06:29.262 Shutdown signal received, stop current app iteration 00:06:29.262 Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 reinitialization... 00:06:29.262 spdk_app_start is called in Round 3. 00:06:29.262 Shutdown signal received, stop current app iteration 00:06:29.262 ************************************ 00:06:29.262 END TEST app_repeat 00:06:29.262 ************************************ 00:06:29.262 14:39:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.262 14:39:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.262 00:06:29.262 real 0m18.255s 00:06:29.262 user 0m39.523s 00:06:29.262 sys 0m2.434s 00:06:29.262 14:39:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.262 14:39:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.262 14:39:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.262 14:39:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.262 14:39:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.262 14:39:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.262 14:39:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.262 ************************************ 00:06:29.262 START TEST cpu_locks 00:06:29.262 ************************************ 00:06:29.262 14:39:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.262 * Looking for test storage... 
00:06:29.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.262 14:39:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.262 14:39:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.262 14:39:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.520 14:39:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.520 --rc genhtml_branch_coverage=1 00:06:29.520 --rc genhtml_function_coverage=1 00:06:29.520 --rc genhtml_legend=1 00:06:29.520 --rc geninfo_all_blocks=1 00:06:29.520 --rc geninfo_unexecuted_blocks=1 00:06:29.520 00:06:29.520 ' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.520 --rc genhtml_branch_coverage=1 00:06:29.520 --rc genhtml_function_coverage=1 
00:06:29.520 --rc genhtml_legend=1 00:06:29.520 --rc geninfo_all_blocks=1 00:06:29.520 --rc geninfo_unexecuted_blocks=1 00:06:29.520 00:06:29.520 ' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.520 --rc genhtml_branch_coverage=1 00:06:29.520 --rc genhtml_function_coverage=1 00:06:29.520 --rc genhtml_legend=1 00:06:29.520 --rc geninfo_all_blocks=1 00:06:29.520 --rc geninfo_unexecuted_blocks=1 00:06:29.520 00:06:29.520 ' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.520 --rc genhtml_branch_coverage=1 00:06:29.520 --rc genhtml_function_coverage=1 00:06:29.520 --rc genhtml_legend=1 00:06:29.520 --rc geninfo_all_blocks=1 00:06:29.520 --rc geninfo_unexecuted_blocks=1 00:06:29.520 00:06:29.520 ' 00:06:29.520 14:39:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.520 14:39:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.520 14:39:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.520 14:39:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.520 14:39:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 ************************************ 00:06:29.520 START TEST default_locks 00:06:29.520 ************************************ 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60178 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60178 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60178 ']' 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.520 14:39:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 [2024-12-09 14:39:07.530153] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
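The lcov version gate traced above ('lt 1.15 2' through cmp_versions) splits dotted versions on '.', '-' and ':' and compares component by component; roughly, assuming purely numeric components:

ver_lt() {                                    # succeed iff $1 < $2, e.g. ver_lt 1.15 2
  local IFS=.-:
  local -a v1 v2
  read -ra v1 <<<"$1"
  read -ra v2 <<<"$2"
  local i a b
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0}; b=${v2[i]:-0}              # missing components compare as 0
    ((a < b)) && return 0
    ((a > b)) && return 1
  done
  return 1                                    # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov older than 2: keep the --rc option spellings"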
00:06:29.520 [2024-12-09 14:39:07.530285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:06:29.778 [2024-12-09 14:39:07.692488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.778 [2024-12-09 14:39:07.807742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.342 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.342 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:30.342 14:39:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60178 00:06:30.342 14:39:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60178 00:06:30.342 14:39:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60178 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60178 ']' 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60178 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60178 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60178' 00:06:30.600 killing process with pid 60178 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60178 00:06:30.600 14:39:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60178 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60178 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60178 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60178 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60178 ']' 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.497 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.497 ERROR: process (pid: 60178) is no longer running 00:06:32.497 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60178) - No such process 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.497 14:39:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.497 00:06:32.497 real 0m2.848s 00:06:32.497 user 0m2.799s 00:06:32.497 sys 0m0.501s 00:06:32.498 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.498 ************************************ 00:06:32.498 END TEST default_locks 00:06:32.498 ************************************ 00:06:32.498 14:39:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.498 14:39:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.498 14:39:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.498 14:39:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.498 14:39:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.498 ************************************ 00:06:32.498 START TEST default_locks_via_rpc 00:06:32.498 ************************************ 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60231 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60231 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60231 ']' 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
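The two tiny assertions driving this suite, sketched from the xtrace above: locks_exist asks lslocks (util-linux) whether the target process holds a lock on a spdk_cpu_lock file, and no_locks asserts that nothing remains on disk once the target is gone (the nullglob handling is an assumption of this sketch):

locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock     # any held lock whose path mentions spdk_cpu_lock
}
no_locks() {
  shopt -s nullglob                           # empty array when the glob matches nothing
  local lock_files=(/var/tmp/spdk_cpu_lock_*)
  (( ${#lock_files[@]} == 0 ))
}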
00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.498 14:39:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.498 [2024-12-09 14:39:10.430242] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:32.498 [2024-12-09 14:39:10.430382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:06:32.498 [2024-12-09 14:39:10.591996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.755 [2024-12-09 14:39:10.706819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60231 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60231 00:06:33.321 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.578 14:39:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60231 00:06:33.578 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60231 ']' 00:06:33.578 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60231 00:06:33.578 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60231 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.579 killing process with pid 60231 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60231' 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60231 00:06:33.579 14:39:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60231 00:06:35.476 00:06:35.476 real 0m2.874s 00:06:35.476 user 0m2.816s 00:06:35.476 sys 0m0.505s 00:06:35.476 14:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.476 14:39:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 ************************************ 00:06:35.477 END TEST default_locks_via_rpc 00:06:35.477 ************************************ 00:06:35.477 14:39:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.477 14:39:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.477 14:39:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.477 14:39:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 ************************************ 00:06:35.477 START TEST non_locking_app_on_locked_coremask 00:06:35.477 ************************************ 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60294 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60294 /var/tmp/spdk.sock 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60294 ']' 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.477 14:39:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 [2024-12-09 14:39:13.342077] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
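default_locks_via_rpc, which just finished above, toggles the same lock state over JSON-RPC instead of process lifetime; the sequence, reusing the helpers sketched earlier and the rpc.py path from this workspace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc framework_disable_cpumask_locks        # target releases /var/tmp/spdk_cpu_lock_*
no_locks || exit 1
$rpc framework_enable_cpumask_locks         # target re-claims its cores
locks_exist "$spdk_tgt_pid" || exit 1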
00:06:35.477 [2024-12-09 14:39:13.342212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60294 ] 00:06:35.477 [2024-12-09 14:39:13.503646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.744 [2024-12-09 14:39:13.623231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60310 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60310 /var/tmp/spdk2.sock 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60310 ']' 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.310 14:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.310 [2024-12-09 14:39:14.353319] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:36.310 [2024-12-09 14:39:14.353446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:06:36.568 [2024-12-09 14:39:14.527423] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
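The arrangement this test sets up, sketched: a first target claims core 0, and a second target on the same core only starts because --disable-cpumask-locks skips the claim; it must listen on its own RPC socket:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
tgt1=$!
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no claim
tgt2=$!
# waitforlisten on each pid/socket pair would follow, as in the trace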
00:06:36.568 [2024-12-09 14:39:14.527486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.827 [2024-12-09 14:39:14.757043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.198 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.198 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.198 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60294 00:06:38.198 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60294 00:06:38.198 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60294 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60294 ']' 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60294 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60294 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.456 killing process with pid 60294 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60294' 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60294 00:06:38.456 14:39:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60294 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60310 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60310 ']' 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60310 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60310 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.736 killing process with pid 60310 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60310' 00:06:41.736 14:39:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60310 00:06:41.736 14:39:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60310 00:06:43.109 00:06:43.109 real 0m7.765s 00:06:43.109 user 0m7.866s 00:06:43.109 sys 0m0.995s 00:06:43.109 14:39:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.109 14:39:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.109 ************************************ 00:06:43.109 END TEST non_locking_app_on_locked_coremask 00:06:43.109 ************************************ 00:06:43.109 14:39:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.109 14:39:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.109 14:39:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.109 14:39:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.109 ************************************ 00:06:43.109 START TEST locking_app_on_unlocked_coremask 00:06:43.109 ************************************ 00:06:43.109 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:43.109 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60423 00:06:43.109 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60423 /var/tmp/spdk.sock 00:06:43.109 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60423 ']' 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.110 14:39:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.110 [2024-12-09 14:39:21.135550] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:43.110 [2024-12-09 14:39:21.135649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60423 ] 00:06:43.367 [2024-12-09 14:39:21.288704] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.367 [2024-12-09 14:39:21.288752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.367 [2024-12-09 14:39:21.391160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60439 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60439 /var/tmp/spdk2.sock 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60439 ']' 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.934 14:39:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.191 [2024-12-09 14:39:22.076413] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:44.192 [2024-12-09 14:39:22.076538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:06:44.192 [2024-12-09 14:39:22.251131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.449 [2024-12-09 14:39:22.448966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.821 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.821 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.821 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60439 00:06:45.821 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60439 00:06:45.821 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60423 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60423 ']' 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60423 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60423 00:06:46.079 killing process with pid 60423 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60423' 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60423 00:06:46.079 14:39:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60423 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60439 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60439 ']' 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60439 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60439 00:06:49.358 killing process with pid 60439 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.358 14:39:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60439' 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60439 00:06:49.358 14:39:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60439 00:06:50.332 00:06:50.332 real 0m7.142s 00:06:50.332 user 0m7.393s 00:06:50.332 sys 0m0.867s 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.332 ************************************ 00:06:50.332 END TEST locking_app_on_unlocked_coremask 00:06:50.332 ************************************ 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.332 14:39:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:50.332 14:39:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.332 14:39:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.332 14:39:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.332 ************************************ 00:06:50.332 START TEST locking_app_on_locked_coremask 00:06:50.332 ************************************ 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60541 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60541 /var/tmp/spdk.sock 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60541 ']' 00:06:50.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.332 14:39:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.332 [2024-12-09 14:39:28.330971] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
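waitforlisten, seen again above, is essentially a bounded poll of the RPC socket; a rough sketch (the real helper lives in autotest_common.sh, and using rpc_get_methods as the probe RPC is an assumption of this sketch):

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the log
    kill -0 "$pid" 2>/dev/null || return 1        # died before it ever listened
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
        rpc_get_methods &>/dev/null; then         # assumed probe RPC
      return 0
    fi
    sleep 0.5
  done
  return 1
}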
00:06:50.332 [2024-12-09 14:39:28.331094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60541 ] 00:06:50.591 [2024-12-09 14:39:28.491794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.591 [2024-12-09 14:39:28.614701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60557 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60557 /var/tmp/spdk2.sock 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60557 /var/tmp/spdk2.sock 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60557 /var/tmp/spdk2.sock 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60557 ']' 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.524 14:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.524 [2024-12-09 14:39:29.410349] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:06:51.524 [2024-12-09 14:39:29.410523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:06:51.524 [2024-12-09 14:39:29.608747] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60541 has claimed it. 00:06:51.524 [2024-12-09 14:39:29.608839] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.090 ERROR: process (pid: 60557) is no longer running 00:06:52.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60557) - No such process 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60541 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60541 00:06:52.090 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60541 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60541 ']' 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60541 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60541 00:06:52.348 killing process with pid 60541 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60541' 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60541 00:06:52.348 14:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60541 00:06:53.721 ************************************ 00:06:53.721 END TEST locking_app_on_locked_coremask 00:06:53.721 ************************************ 00:06:53.721 00:06:53.721 real 0m3.480s 00:06:53.721 user 0m3.664s 00:06:53.721 sys 0m0.649s 00:06:53.721 14:39:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.721 14:39:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.721 14:39:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.721 14:39:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.721 14:39:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.721 14:39:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.721 ************************************ 00:06:53.721 START TEST locking_overlapped_coremask 00:06:53.721 ************************************ 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60610 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60610 /var/tmp/spdk.sock 00:06:53.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60610 ']' 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.721 14:39:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.979 [2024-12-09 14:39:31.849349] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
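The claim failure above ("Cannot create lock on core 0, probably process 60541 has claimed it") is the point of the test: spdk_tgt takes an exclusive lock per claimed core inside app.c, and a second instance on the same core aborts. Approximated in shell with util-linux flock:

core=0
lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
exec 9>"$lockfile"                       # create/open the per-core lock file
if ! flock -n 9; then                    # non-blocking exclusive lock attempt
  echo "Cannot create lock on core $core, another process has claimed it" >&2
  exit 1
fi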
00:06:53.979 [2024-12-09 14:39:31.849467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:06:53.979 [2024-12-09 14:39:32.008360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.239 [2024-12-09 14:39:32.110274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.239 [2024-12-09 14:39:32.110813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.239 [2024-12-09 14:39:32.110925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60628 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60628 /var/tmp/spdk2.sock 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60628 /var/tmp/spdk2.sock 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60628 /var/tmp/spdk2.sock 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60628 ']' 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.811 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.812 14:39:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.812 [2024-12-09 14:39:32.786299] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
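The two cpumasks in play here overlap on exactly one core, which is what the second start must trip over; quick arithmetic:

printf 'overlap of 0x7 and 0x1c: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2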
00:06:54.812 [2024-12-09 14:39:32.786410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60628 ] 00:06:55.070 [2024-12-09 14:39:32.960559] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60610 has claimed it. 00:06:55.070 [2024-12-09 14:39:32.963827] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.328 ERROR: process (pid: 60628) is no longer running 00:06:55.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60628) - No such process 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60610 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60610 ']' 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60610 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60610 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60610' 00:06:55.328 killing process with pid 60610 00:06:55.328 14:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60610 00:06:55.328 14:39:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60610 00:06:57.241 00:06:57.241 real 0m3.271s 00:06:57.241 user 0m8.848s 00:06:57.241 sys 0m0.421s 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.241 ************************************ 00:06:57.241 END TEST locking_overlapped_coremask 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.241 ************************************ 00:06:57.241 14:39:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.241 14:39:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.241 14:39:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.241 14:39:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.241 ************************************ 00:06:57.241 START TEST locking_overlapped_coremask_via_rpc 00:06:57.241 ************************************ 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60687 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60687 /var/tmp/spdk.sock 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60687 ']' 00:06:57.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.241 14:39:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.241 [2024-12-09 14:39:35.200857] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:57.242 [2024-12-09 14:39:35.201573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:06:57.242 [2024-12-09 14:39:35.359310] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
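check_remaining_locks, run just above after the overlap test, compares the lock files on disk against exactly the set a 0x7 mask should have produced; as a sketch of the traced comparison:

check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
  [[ ${locks[*]} == "${locks_expected[*]}" ]]                # same names, same order
}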
00:06:57.242 [2024-12-09 14:39:35.359506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.502 [2024-12-09 14:39:35.491638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.502 [2024-12-09 14:39:35.492432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.502 [2024-12-09 14:39:35.492534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60705 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60705 /var/tmp/spdk2.sock 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60705 ']' 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.446 14:39:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.446 [2024-12-09 14:39:36.464841] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:06:58.446 [2024-12-09 14:39:36.465588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60705 ] 00:06:58.707 [2024-12-09 14:39:36.680349] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
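The two core masks in play overlap by construction, which is the whole point of the test; a line of shell arithmetic shows which core the targets will contend for:

# -m 0x7  -> binary 00111 -> cores 0,1,2 (first target, pid 60687)
# -m 0x1c -> binary 11100 -> cores 2,3,4 (second target, pid 60705)
printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2
# Core 2 is the contested one, matching the "Cannot create lock on
# core 2" error that appears once the second claim is attempted.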
00:06:58.707 [2024-12-09 14:39:36.680621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.968 [2024-12-09 14:39:36.975261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.968 [2024-12-09 14:39:36.975472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.968 [2024-12-09 14:39:36.975642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:00.874 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.875 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.875 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.875 14:39:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.875 [2024-12-09 14:39:38.993970] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60687 has claimed it. 00:07:01.133 request: 00:07:01.133 { 00:07:01.133 "method": "framework_enable_cpumask_locks", 00:07:01.133 "req_id": 1 00:07:01.133 } 00:07:01.133 Got JSON-RPC error response 00:07:01.133 response: 00:07:01.133 { 00:07:01.133 "code": -32603, 00:07:01.133 "message": "Failed to claim CPU core: 2" 00:07:01.133 } 00:07:01.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
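Condensed, the sequence this test just drove looks as follows; the binaries, sockets, masks, and RPC names are exactly those in the trace, and only the ordering commentary is added:

# Both targets boot with --disable-cpumask-locks, so neither takes core
# locks at startup; the locks are claimed later, on demand, over JSON-RPC.
spdk_tgt -m 0x7 --disable-cpumask-locks &                          # pid 60687
spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 60705
rpc.py framework_enable_cpumask_locks             # first claim: cores 0-2, OK
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected: -32603 'Failed to claim CPU core: 2'"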
00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60687 /var/tmp/spdk.sock 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60687 ']' 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60705 /var/tmp/spdk2.sock 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60705 ']' 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.133 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.391 00:07:01.391 real 0m4.310s 00:07:01.391 user 0m1.278s 00:07:01.391 sys 0m0.183s 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.391 14:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.391 ************************************ 00:07:01.391 END TEST locking_overlapped_coremask_via_rpc 00:07:01.391 ************************************ 00:07:01.391 14:39:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.391 14:39:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60687 ]] 00:07:01.392 14:39:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60687 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60687 ']' 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60687 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60687 00:07:01.392 killing process with pid 60687 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60687' 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60687 00:07:01.392 14:39:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60687 00:07:03.292 14:39:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60705 ]] 00:07:03.292 14:39:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60705 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60705 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.292 
14:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60705 00:07:03.292 killing process with pid 60705 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60705' 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60705 00:07:03.292 14:39:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60705 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.287 Process with pid 60687 is not found 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60687 ]] 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60687 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60687 ']' 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60687 00:07:04.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60687) - No such process 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60687 is not found' 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60705 ]] 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60705 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:07:04.287 Process with pid 60705 is not found 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60705 00:07:04.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60705) - No such process 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60705 is not found' 00:07:04.287 14:39:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.287 ************************************ 00:07:04.287 END TEST cpu_locks 00:07:04.287 ************************************ 00:07:04.287 00:07:04.287 real 0m34.956s 00:07:04.287 user 1m2.109s 00:07:04.287 sys 0m5.288s 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.287 14:39:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.287 ************************************ 00:07:04.287 END TEST event 00:07:04.287 ************************************ 00:07:04.287 00:07:04.287 real 1m3.484s 00:07:04.287 user 1m58.022s 00:07:04.287 sys 0m8.582s 00:07:04.287 14:39:42 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.287 14:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.287 14:39:42 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.287 14:39:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.287 14:39:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.287 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:07:04.287 ************************************ 00:07:04.287 START TEST thread 00:07:04.287 ************************************ 00:07:04.287 14:39:42 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.287 * Looking for test storage... 
00:07:04.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:04.287 14:39:42 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.287 14:39:42 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.287 14:39:42 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.548 14:39:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.548 14:39:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.548 14:39:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.548 14:39:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.548 14:39:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.548 14:39:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.548 14:39:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.548 14:39:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.548 14:39:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.548 14:39:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.548 14:39:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.548 14:39:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:04.548 14:39:42 thread -- scripts/common.sh@345 -- # : 1 00:07:04.548 14:39:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.548 14:39:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.548 14:39:42 thread -- scripts/common.sh@365 -- # decimal 1 00:07:04.548 14:39:42 thread -- scripts/common.sh@353 -- # local d=1 00:07:04.548 14:39:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.548 14:39:42 thread -- scripts/common.sh@355 -- # echo 1 00:07:04.548 14:39:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.548 14:39:42 thread -- scripts/common.sh@366 -- # decimal 2 00:07:04.548 14:39:42 thread -- scripts/common.sh@353 -- # local d=2 00:07:04.548 14:39:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.548 14:39:42 thread -- scripts/common.sh@355 -- # echo 2 00:07:04.548 14:39:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.548 14:39:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.548 14:39:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.548 14:39:42 thread -- scripts/common.sh@368 -- # return 0 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.548 --rc genhtml_branch_coverage=1 00:07:04.548 --rc genhtml_function_coverage=1 00:07:04.548 --rc genhtml_legend=1 00:07:04.548 --rc geninfo_all_blocks=1 00:07:04.548 --rc geninfo_unexecuted_blocks=1 00:07:04.548 00:07:04.548 ' 00:07:04.548 14:39:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.548 14:39:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.548 ************************************ 00:07:04.548 START TEST thread_poller_perf 00:07:04.548 ************************************ 00:07:04.548 14:39:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.548 [2024-12-09 14:39:42.493721] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:04.548 [2024-12-09 14:39:42.494413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:07:04.548 [2024-12-09 14:39:42.653944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.808 [2024-12-09 14:39:42.755520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.808 Running 1000 pollers for 1 seconds with 1 microseconds period. 
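The poller_perf flags map directly onto the banner just printed; the mapping below is inferred from that banner rather than from the tool's own help text:

poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 second run
poller_perf -b 1000 -l 0 -t 1   # the second run below: 0 us period (busy loop)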
00:07:06.192 [2024-12-09T14:39:44.314Z] ====================================== 00:07:06.192 [2024-12-09T14:39:44.314Z] busy:2614045720 (cyc) 00:07:06.192 [2024-12-09T14:39:44.314Z] total_run_count: 304000 00:07:06.192 [2024-12-09T14:39:44.314Z] tsc_hz: 2600000000 (cyc) 00:07:06.192 [2024-12-09T14:39:44.314Z] ====================================== 00:07:06.192 [2024-12-09T14:39:44.314Z] poller_cost: 8598 (cyc), 3306 (nsec) 00:07:06.192 00:07:06.192 real 0m1.465s 00:07:06.192 user 0m1.280s 00:07:06.192 sys 0m0.075s 00:07:06.192 14:39:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.192 14:39:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.192 ************************************ 00:07:06.192 END TEST thread_poller_perf 00:07:06.192 ************************************ 00:07:06.192 14:39:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.192 14:39:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:06.192 14:39:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.192 14:39:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.192 ************************************ 00:07:06.192 START TEST thread_poller_perf 00:07:06.192 ************************************ 00:07:06.192 14:39:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.192 [2024-12-09 14:39:44.020886] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:06.192 [2024-12-09 14:39:44.021099] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60914 ] 00:07:06.192 [2024-12-09 14:39:44.178166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.192 [2024-12-09 14:39:44.280402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.192 Running 1000 pollers for 1 seconds with 0 microseconds period. 
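The poller_cost figures printed above are plain division of the reported counters, and the 0 us run that just started is scored the same way. Checking the first run's numbers:

# poller_cost (cyc)  = busy cycles / total_run_count
# poller_cost (nsec) = cyc / (tsc_hz / 1e9)
echo $(( 2614045720 / 304000 ))              # => 8598 cyc per poll
echo $(( 8598 * 1000000000 / 2600000000 ))   # => 3306 nsec at a 2.6 GHz TSC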
00:07:07.563 [2024-12-09T14:39:45.685Z] ====================================== 00:07:07.563 [2024-12-09T14:39:45.685Z] busy:2603349056 (cyc) 00:07:07.563 [2024-12-09T14:39:45.685Z] total_run_count: 3614000 00:07:07.563 [2024-12-09T14:39:45.685Z] tsc_hz: 2600000000 (cyc) 00:07:07.563 [2024-12-09T14:39:45.685Z] ====================================== 00:07:07.563 [2024-12-09T14:39:45.685Z] poller_cost: 720 (cyc), 276 (nsec) 00:07:07.563 ************************************ 00:07:07.563 END TEST thread_poller_perf 00:07:07.563 ************************************ 00:07:07.563 00:07:07.563 real 0m1.453s 00:07:07.563 user 0m1.277s 00:07:07.563 sys 0m0.066s 00:07:07.563 14:39:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.563 14:39:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.563 14:39:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:07.563 00:07:07.563 real 0m3.168s 00:07:07.563 user 0m2.670s 00:07:07.563 sys 0m0.259s 00:07:07.563 ************************************ 00:07:07.563 END TEST thread 00:07:07.563 ************************************ 00:07:07.563 14:39:45 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.563 14:39:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.563 14:39:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:07.563 14:39:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.563 14:39:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.563 14:39:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.563 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.563 ************************************ 00:07:07.563 START TEST app_cmdline 00:07:07.563 ************************************ 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.563 * Looking for test storage... 
00:07:07.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:07.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.563 14:39:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.563 --rc genhtml_branch_coverage=1 00:07:07.563 --rc genhtml_function_coverage=1 00:07:07.563 --rc genhtml_legend=1 00:07:07.563 --rc geninfo_all_blocks=1 00:07:07.563 --rc geninfo_unexecuted_blocks=1 00:07:07.563 00:07:07.563 ' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.563 --rc genhtml_branch_coverage=1 00:07:07.563 --rc genhtml_function_coverage=1 00:07:07.563 --rc genhtml_legend=1 00:07:07.563 --rc geninfo_all_blocks=1 00:07:07.563 --rc geninfo_unexecuted_blocks=1 00:07:07.563 00:07:07.563 ' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.563 --rc genhtml_branch_coverage=1 00:07:07.563 --rc genhtml_function_coverage=1 00:07:07.563 --rc genhtml_legend=1 00:07:07.563 --rc geninfo_all_blocks=1 00:07:07.563 --rc geninfo_unexecuted_blocks=1 00:07:07.563 00:07:07.563 ' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.563 --rc genhtml_branch_coverage=1 00:07:07.563 --rc genhtml_function_coverage=1 00:07:07.563 --rc genhtml_legend=1 00:07:07.563 --rc geninfo_all_blocks=1 00:07:07.563 --rc geninfo_unexecuted_blocks=1 00:07:07.563 00:07:07.563 ' 00:07:07.563 14:39:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:07.563 14:39:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60998 00:07:07.563 14:39:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60998 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60998 ']' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.563 14:39:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:07.563 14:39:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 [2024-12-09 14:39:45.728647] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
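The --rpcs-allowed flag on that spdk_tgt launch is what this test exercises: only the two listed methods may answer. In outline, with commands and error codes as they appear in the surrounding trace:

spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc.py spdk_get_version          # allowed: returns the version JSON below
rpc.py rpc_get_methods           # allowed: must list exactly these 2 methods
rpc.py env_dpdk_get_mem_stats    # filtered: -32601 "Method not found"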
00:07:07.821 [2024-12-09 14:39:45.728765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:07:07.821 [2024-12-09 14:39:45.891130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.078 [2024-12-09 14:39:45.988880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.643 14:39:46 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.643 14:39:46 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:08.643 14:39:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:08.901 { 00:07:08.901 "version": "SPDK v25.01-pre git sha1 805149865", 00:07:08.901 "fields": { 00:07:08.901 "major": 25, 00:07:08.901 "minor": 1, 00:07:08.901 "patch": 0, 00:07:08.901 "suffix": "-pre", 00:07:08.901 "commit": "805149865" 00:07:08.901 } 00:07:08.901 } 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.901 14:39:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:08.901 14:39:46 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.158 request: 00:07:09.158 { 00:07:09.158 "method": "env_dpdk_get_mem_stats", 00:07:09.158 "req_id": 1 00:07:09.158 } 00:07:09.158 Got JSON-RPC error response 00:07:09.158 response: 00:07:09.158 { 00:07:09.158 "code": -32601, 00:07:09.158 "message": "Method not found" 00:07:09.158 } 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.158 14:39:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60998 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60998 ']' 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60998 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60998 00:07:09.158 killing process with pid 60998 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60998' 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 60998 00:07:09.158 14:39:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 60998 00:07:10.529 ************************************ 00:07:10.529 END TEST app_cmdline 00:07:10.529 ************************************ 00:07:10.529 00:07:10.529 real 0m3.101s 00:07:10.529 user 0m3.491s 00:07:10.529 sys 0m0.421s 00:07:10.529 14:39:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.529 14:39:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.529 14:39:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.529 14:39:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.529 14:39:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.529 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.787 ************************************ 00:07:10.787 START TEST version 00:07:10.787 ************************************ 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.787 * Looking for test storage... 
00:07:10.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.787 14:39:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.787 14:39:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.787 14:39:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.787 14:39:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.787 14:39:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.787 14:39:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.787 14:39:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.787 14:39:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.787 14:39:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.787 14:39:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.787 14:39:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.787 14:39:48 version -- scripts/common.sh@344 -- # case "$op" in 00:07:10.787 14:39:48 version -- scripts/common.sh@345 -- # : 1 00:07:10.787 14:39:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.787 14:39:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.787 14:39:48 version -- scripts/common.sh@365 -- # decimal 1 00:07:10.787 14:39:48 version -- scripts/common.sh@353 -- # local d=1 00:07:10.787 14:39:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.787 14:39:48 version -- scripts/common.sh@355 -- # echo 1 00:07:10.787 14:39:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.787 14:39:48 version -- scripts/common.sh@366 -- # decimal 2 00:07:10.787 14:39:48 version -- scripts/common.sh@353 -- # local d=2 00:07:10.787 14:39:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.787 14:39:48 version -- scripts/common.sh@355 -- # echo 2 00:07:10.787 14:39:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.787 14:39:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.787 14:39:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.787 14:39:48 version -- scripts/common.sh@368 -- # return 0 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.787 --rc genhtml_branch_coverage=1 00:07:10.787 --rc genhtml_function_coverage=1 00:07:10.787 --rc genhtml_legend=1 00:07:10.787 --rc geninfo_all_blocks=1 00:07:10.787 --rc geninfo_unexecuted_blocks=1 00:07:10.787 00:07:10.787 ' 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.787 --rc genhtml_branch_coverage=1 00:07:10.787 --rc genhtml_function_coverage=1 00:07:10.787 --rc genhtml_legend=1 00:07:10.787 --rc geninfo_all_blocks=1 00:07:10.787 --rc geninfo_unexecuted_blocks=1 00:07:10.787 00:07:10.787 ' 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.787 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:10.787 --rc genhtml_branch_coverage=1 00:07:10.787 --rc genhtml_function_coverage=1 00:07:10.787 --rc genhtml_legend=1 00:07:10.787 --rc geninfo_all_blocks=1 00:07:10.787 --rc geninfo_unexecuted_blocks=1 00:07:10.787 00:07:10.787 ' 00:07:10.787 14:39:48 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.787 --rc genhtml_branch_coverage=1 00:07:10.787 --rc genhtml_function_coverage=1 00:07:10.787 --rc genhtml_legend=1 00:07:10.787 --rc geninfo_all_blocks=1 00:07:10.787 --rc geninfo_unexecuted_blocks=1 00:07:10.787 00:07:10.787 ' 00:07:10.787 14:39:48 version -- app/version.sh@17 -- # get_header_version major 00:07:10.787 14:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.787 14:39:48 version -- app/version.sh@17 -- # major=25 00:07:10.787 14:39:48 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.787 14:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:10.787 14:39:48 version -- app/version.sh@18 -- # minor=1 00:07:10.787 14:39:48 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.787 14:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:10.787 14:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.787 14:39:48 version -- app/version.sh@19 -- # patch=0 00:07:10.787 14:39:48 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.788 14:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.788 14:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:10.788 14:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.788 14:39:48 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.788 14:39:48 version -- app/version.sh@22 -- # version=25.1 00:07:10.788 14:39:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.788 14:39:48 version -- app/version.sh@28 -- # version=25.1rc0 00:07:10.788 14:39:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:10.788 14:39:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.788 14:39:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:10.788 14:39:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:10.788 00:07:10.788 real 0m0.187s 00:07:10.788 user 0m0.124s 00:07:10.788 sys 0m0.093s 00:07:10.788 ************************************ 00:07:10.788 END TEST version 00:07:10.788 ************************************ 00:07:10.788 14:39:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.788 14:39:48 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.788 14:39:48 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:10.788 14:39:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:10.788 14:39:48 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.788 14:39:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:10.788 14:39:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.788 14:39:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.788 14:39:48 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:10.788 14:39:48 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:10.788 14:39:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.788 14:39:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.788 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.788 ************************************ 00:07:10.788 START TEST blockdev_nvme 00:07:10.788 ************************************ 00:07:10.788 14:39:48 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:11.045 * Looking for test storage... 00:07:11.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:11.045 14:39:48 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.045 14:39:48 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.045 14:39:48 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.045 14:39:49 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.045 --rc genhtml_branch_coverage=1 00:07:11.045 --rc genhtml_function_coverage=1 00:07:11.045 --rc genhtml_legend=1 00:07:11.045 --rc geninfo_all_blocks=1 00:07:11.045 --rc geninfo_unexecuted_blocks=1 00:07:11.045 00:07:11.045 ' 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.045 --rc genhtml_branch_coverage=1 00:07:11.045 --rc genhtml_function_coverage=1 00:07:11.045 --rc genhtml_legend=1 00:07:11.045 --rc geninfo_all_blocks=1 00:07:11.045 --rc geninfo_unexecuted_blocks=1 00:07:11.045 00:07:11.045 ' 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.045 --rc genhtml_branch_coverage=1 00:07:11.045 --rc genhtml_function_coverage=1 00:07:11.045 --rc genhtml_legend=1 00:07:11.045 --rc geninfo_all_blocks=1 00:07:11.045 --rc geninfo_unexecuted_blocks=1 00:07:11.045 00:07:11.045 ' 00:07:11.045 14:39:49 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.045 --rc genhtml_branch_coverage=1 00:07:11.045 --rc genhtml_function_coverage=1 00:07:11.045 --rc genhtml_legend=1 00:07:11.045 --rc geninfo_all_blocks=1 00:07:11.045 --rc geninfo_unexecuted_blocks=1 00:07:11.045 00:07:11.045 ' 00:07:11.045 14:39:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:11.045 14:39:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:11.045 14:39:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61175 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61175 00:07:11.046 14:39:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61175 ']' 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.046 14:39:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.046 [2024-12-09 14:39:49.113392] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
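waitforlisten, used for every target in this log, never shows its body in the trace; only its "Waiting for process..." echo and max_retries=100 are visible. As a rough mental model (a hypothetical sketch, not the real helper from test/common/autotest_common.sh):

# Hypothetical shape of waitforlisten: poll the target's RPC socket until
# it answers, bail out if the process dies or retries run out.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 100; i > 0; i-- )); do                  # max_retries=100
        kill -0 "$pid" 2>/dev/null || return 1         # target went away
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}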
00:07:11.046 [2024-12-09 14:39:49.113674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61175 ] 00:07:11.303 [2024-12-09 14:39:49.269727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.303 [2024-12-09 14:39:49.378155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:12.235 14:39:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.235 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.498 14:39:50 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.498 14:39:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:12.498 14:39:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:12.499 14:39:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "90ae52d2-65c3-45a7-922d-880f775a3e0e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "90ae52d2-65c3-45a7-922d-880f775a3e0e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f4a94aa9-3ba9-4b7d-974a-e631808753ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f4a94aa9-3ba9-4b7d-974a-e631808753ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f3c29744-2e1c-4859-a78e-27f87c07b643"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3c29744-2e1c-4859-a78e-27f87c07b643",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d9972a92-a839-453e-b53e-23d569697821"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9972a92-a839-453e-b53e-23d569697821",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b3e38096-5419-4bec-bbc1-5b59e6d33745"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "b3e38096-5419-4bec-bbc1-5b59e6d33745",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "522f73b5-d7cf-4bb5-adad-ed6ab565df0f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "522f73b5-d7cf-4bb5-adad-ed6ab565df0f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:12.499 14:39:50 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:12.499 14:39:50 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:12.499 14:39:50 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:12.499 14:39:50 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61175 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61175 ']' 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61175 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:12.499 14:39:50 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61175 00:07:12.499 killing process with pid 61175 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61175' 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61175 00:07:12.499 14:39:50 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61175 00:07:14.434 14:39:52 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.434 14:39:52 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.434 14:39:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:14.434 14:39:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.434 14:39:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.434 ************************************ 00:07:14.434 START TEST bdev_hello_world 00:07:14.434 ************************************ 00:07:14.434 14:39:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.434 [2024-12-09 14:39:52.149586] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:14.434 [2024-12-09 14:39:52.149989] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:07:14.434 [2024-12-09 14:39:52.313001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.434 [2024-12-09 14:39:52.441196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.997 [2024-12-09 14:39:53.010564] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:14.997 [2024-12-09 14:39:53.010741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:14.997 [2024-12-09 14:39:53.010768] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:14.997 [2024-12-09 14:39:53.013397] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:14.997 [2024-12-09 14:39:53.013770] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:14.997 [2024-12-09 14:39:53.013891] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:14.997 [2024-12-09 14:39:53.014128] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
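The hello_world step above reduces to: enumerate unclaimed bdevs over RPC, take the first name, and run the hello_bdev example against it with the same JSON config. A condensed sketch follows, with commands copied from the trace; note the harness stops spdk_tgt first, since hello_bdev brings up its own SPDK app instance.

    # List unclaimed bdevs and pick the first one (Nvme0n1 in this run).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mapfile -t bdevs_name < <("$RPC" bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name')
    hello_world_bdev=${bdevs_name[0]}

    # Run the example app: it opens the bdev, writes, and reads back "Hello World!".
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b "$hello_world_bdev"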
00:07:14.997 00:07:14.997 [2024-12-09 14:39:53.014152] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:15.928 ************************************ 00:07:15.928 END TEST bdev_hello_world 00:07:15.928 ************************************ 00:07:15.928 00:07:15.928 real 0m1.731s 00:07:15.928 user 0m1.418s 00:07:15.928 sys 0m0.206s 00:07:15.928 14:39:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.928 14:39:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:15.928 14:39:53 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:15.928 14:39:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.928 14:39:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.928 14:39:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.928 ************************************ 00:07:15.928 START TEST bdev_bounds 00:07:15.928 ************************************ 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61296 00:07:15.928 Process bdevio pid: 61296 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61296' 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61296 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61296 ']' 00:07:15.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.928 14:39:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:15.928 [2024-12-09 14:39:53.908448] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
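bdev_bounds runs the bdevio app in wait-for-start mode and then kicks off the suites over RPC, as the perform_tests call in the lines just below shows. A sketch of that two-step flow, with flags copied from the trace (-s 0 reflects the PRE_RESERVED_MEM=0 set earlier); the socket wait is the same waitforlisten pattern as before.

    # Start bdevio idle (-w: wait to be started via RPC), no pre-reserved memory.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # ...wait for /var/tmp/spdk.sock to appear, then launch every suite:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests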
00:07:15.928 [2024-12-09 14:39:53.908582] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:07:16.186 [2024-12-09 14:39:54.069014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.186 [2024-12-09 14:39:54.184591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.186 [2024-12-09 14:39:54.185021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.186 [2024-12-09 14:39:54.185023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.752 14:39:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.752 14:39:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:16.752 14:39:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:17.009 I/O targets: 00:07:17.009 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:17.009 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:17.009 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.009 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.009 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.009 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:17.009 00:07:17.009 00:07:17.009 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.009 http://cunit.sourceforge.net/ 00:07:17.009 00:07:17.009 00:07:17.009 Suite: bdevio tests on: Nvme3n1 00:07:17.009 Test: blockdev write read block ...passed 00:07:17.009 Test: blockdev write zeroes read block ...passed 00:07:17.009 Test: blockdev write zeroes read no split ...passed 00:07:17.009 Test: blockdev write zeroes read split ...passed 00:07:17.009 Test: blockdev write zeroes read split partial ...passed 00:07:17.009 Test: blockdev reset ...[2024-12-09 14:39:55.098020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:17.009 [2024-12-09 14:39:55.103630] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:17.009 passed 00:07:17.009 Test: blockdev write read 8 blocks ...passed 00:07:17.009 Test: blockdev write read size > 128k ...passed 00:07:17.009 Test: blockdev write read invalid size ...passed 00:07:17.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.009 Test: blockdev write read max offset ...passed 00:07:17.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.009 Test: blockdev writev readv 8 blocks ...passed 00:07:17.009 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.009 Test: blockdev writev readv block ...passed 00:07:17.009 Test: blockdev writev readv size > 128k ...passed 00:07:17.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.010 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.112931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad20a000 len:0x1000 00:07:17.010 [2024-12-09 14:39:55.113164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.010 passed 00:07:17.010 Test: blockdev nvme passthru rw ...passed 00:07:17.010 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:39:55.114148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.010 [2024-12-09 14:39:55.114329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:07:17.010 00:07:17.010 Test: blockdev nvme admin passthru ...passed 00:07:17.010 Test: blockdev copy ...passed 00:07:17.010 Suite: bdevio tests on: Nvme2n3 00:07:17.010 Test: blockdev write read block ...passed 00:07:17.268 Test: blockdev write zeroes read block ...passed 00:07:17.268 Test: blockdev write zeroes read no split ...passed 00:07:17.268 Test: blockdev write zeroes read split ...passed 00:07:17.268 Test: blockdev write zeroes read split partial ...passed 00:07:17.268 Test: blockdev reset ...[2024-12-09 14:39:55.248923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.268 [2024-12-09 14:39:55.252697] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.268 passed 00:07:17.268 Test: blockdev write read 8 blocks ...passed 00:07:17.268 Test: blockdev write read size > 128k ...passed 00:07:17.268 Test: blockdev write read invalid size ...passed 00:07:17.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.268 Test: blockdev write read max offset ...passed 00:07:17.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.268 Test: blockdev writev readv 8 blocks ...passed 00:07:17.268 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.268 Test: blockdev writev readv block ...passed 00:07:17.268 Test: blockdev writev readv size > 128k ...passed 00:07:17.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.268 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.260116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28fc06000 len:0x1000 00:07:17.268 [2024-12-09 14:39:55.260342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.268 passed 00:07:17.268 Test: blockdev nvme passthru rw ...passed 00:07:17.268 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:39:55.261423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.268 passed[2024-12-09 14:39:55.261546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.268 00:07:17.268 Test: blockdev nvme admin passthru ...passed 00:07:17.268 Test: blockdev copy ...passed 00:07:17.268 Suite: bdevio tests on: Nvme2n2 00:07:17.268 Test: blockdev write read block ...passed 00:07:17.268 Test: blockdev write zeroes read block ...passed 00:07:17.268 Test: blockdev write zeroes read no split ...passed 00:07:17.268 Test: blockdev write zeroes read split ...passed 00:07:17.268 Test: blockdev write zeroes read split partial ...passed 00:07:17.268 Test: blockdev reset ...[2024-12-09 14:39:55.388499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.527 [2024-12-09 14:39:55.391456] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.527 passed 00:07:17.527 Test: blockdev write read 8 blocks ...passed 00:07:17.527 Test: blockdev write read size > 128k ...passed 00:07:17.527 Test: blockdev write read invalid size ...passed 00:07:17.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.527 Test: blockdev write read max offset ...passed 00:07:17.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.527 Test: blockdev writev readv 8 blocks ...passed 00:07:17.527 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.527 Test: blockdev writev readv block ...passed 00:07:17.527 Test: blockdev writev readv size > 128k ...passed 00:07:17.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.527 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.398165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf43c000 len:0x1000 00:07:17.527 [2024-12-09 14:39:55.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.527 passed 00:07:17.527 Test: blockdev nvme passthru rw ...passed 00:07:17.527 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.527 Test: blockdev nvme admin passthru ...[2024-12-09 14:39:55.398811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.527 [2024-12-09 14:39:55.398839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.527 passed 00:07:17.527 Test: blockdev copy ...passed 00:07:17.527 Suite: bdevio tests on: Nvme2n1 00:07:17.527 Test: blockdev write read block ...passed 00:07:17.527 Test: blockdev write zeroes read block ...passed 00:07:17.527 Test: blockdev write zeroes read no split ...passed 00:07:17.527 Test: blockdev write zeroes read split ...passed 00:07:17.527 Test: blockdev write zeroes read split partial ...passed 00:07:17.527 Test: blockdev reset ...[2024-12-09 14:39:55.586878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.527 [2024-12-09 14:39:55.592490] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.527 passed 00:07:17.527 Test: blockdev write read 8 blocks ...passed 00:07:17.527 Test: blockdev write read size > 128k ...passed 00:07:17.527 Test: blockdev write read invalid size ...passed 00:07:17.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.527 Test: blockdev write read max offset ...passed 00:07:17.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.527 Test: blockdev writev readv 8 blocks ...passed 00:07:17.527 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.527 Test: blockdev writev readv block ...passed 00:07:17.527 Test: blockdev writev readv size > 128k ...passed 00:07:17.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.527 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.601355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf438000 len:0x1000 00:07:17.527 [2024-12-09 14:39:55.601558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.527 passed 00:07:17.527 Test: blockdev nvme passthru rw ...passed 00:07:17.527 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:39:55.602319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.527 [2024-12-09 14:39:55.602396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.527 passed 00:07:17.527 Test: blockdev nvme admin passthru ...passed 00:07:17.527 Test: blockdev copy ...passed 00:07:17.527 Suite: bdevio tests on: Nvme1n1 00:07:17.527 Test: blockdev write read block ...passed 00:07:17.785 Test: blockdev write zeroes read block ...passed 00:07:17.785 Test: blockdev write zeroes read no split ...passed 00:07:17.785 Test: blockdev write zeroes read split ...passed 00:07:17.785 Test: blockdev write zeroes read split partial ...passed 00:07:17.785 Test: blockdev reset ...[2024-12-09 14:39:55.693532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.785 [2024-12-09 14:39:55.697713] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.785 passed 00:07:17.785 Test: blockdev write read 8 blocks ...passed 00:07:17.785 Test: blockdev write read size > 128k ...passed 00:07:17.785 Test: blockdev write read invalid size ...passed 00:07:17.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.785 Test: blockdev write read max offset ...passed 00:07:17.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.785 Test: blockdev writev readv 8 blocks ...passed 00:07:17.785 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.785 Test: blockdev writev readv block ...passed 00:07:17.785 Test: blockdev writev readv size > 128k ...passed 00:07:17.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.785 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.704307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf434000 len:0x1000 00:07:17.785 passed 00:07:17.785 Test: blockdev nvme passthru rw ...passed 00:07:17.785 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:39:55.704496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.785 [2024-12-09 14:39:55.705036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.785 [2024-12-09 14:39:55.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.785 passed 00:07:17.785 Test: blockdev nvme admin passthru ...passed 00:07:17.785 Test: blockdev copy ...passed 00:07:17.785 Suite: bdevio tests on: Nvme0n1 00:07:17.785 Test: blockdev write read block ...passed 00:07:17.785 Test: blockdev write zeroes read block ...passed 00:07:17.785 Test: blockdev write zeroes read no split ...passed 00:07:17.785 Test: blockdev write zeroes read split ...passed 00:07:17.785 Test: blockdev write zeroes read split partial ...passed 00:07:17.785 Test: blockdev reset ...[2024-12-09 14:39:55.853627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:17.785 [2024-12-09 14:39:55.857529] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:17.785 passed 00:07:17.785 Test: blockdev write read 8 blocks ...passed 00:07:17.785 Test: blockdev write read size > 128k ...passed 00:07:17.785 Test: blockdev write read invalid size ...passed 00:07:17.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.785 Test: blockdev write read max offset ...passed 00:07:17.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.785 Test: blockdev writev readv 8 blocks ...passed 00:07:17.785 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.785 Test: blockdev writev readv block ...passed 00:07:17.785 Test: blockdev writev readv size > 128k ...passed 00:07:17.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.785 Test: blockdev comparev and writev ...[2024-12-09 14:39:55.864918] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 spassed 00:07:17.785 Test: blockdev nvme passthru rw ...ince it has 00:07:17.785 separate metadata which is not supported yet. 00:07:17.785 passed 00:07:17.785 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.785 Test: blockdev nvme admin passthru ...[2024-12-09 14:39:55.865560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:17.785 [2024-12-09 14:39:55.865596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:17.785 passed 00:07:17.785 Test: blockdev copy ...passed 00:07:17.785 00:07:17.785 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.785 suites 6 6 n/a 0 0 00:07:17.785 tests 138 138 138 0 0 00:07:17.785 asserts 893 893 893 0 n/a 00:07:17.785 00:07:17.785 Elapsed time = 2.129 seconds 00:07:17.785 0 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61296 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61296 ']' 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61296 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.785 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61296 00:07:18.043 killing process with pid 61296 00:07:18.043 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.043 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.043 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61296' 00:07:18.043 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61296 00:07:18.043 14:39:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61296 00:07:21.324 14:39:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:21.324 00:07:21.324 real 0m5.091s 00:07:21.324 user 0m13.705s 00:07:21.324 sys 0m0.383s 00:07:21.324 14:39:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.324 14:39:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:21.324 ************************************ 00:07:21.324 END 
TEST bdev_bounds 00:07:21.324 ************************************ 00:07:21.324 14:39:58 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:21.324 14:39:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.324 14:39:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.324 14:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.324 ************************************ 00:07:21.324 START TEST bdev_nbd 00:07:21.324 ************************************ 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61363 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61363 /var/tmp/spdk-nbd.sock 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61363 ']' 00:07:21.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
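The nbd test drives a dedicated bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock), keeping the nbd RPC traffic off the default /var/tmp/spdk.sock. The launch, as traced below, amounts to:

    # Start the bdev service for the nbd tests on a dedicated RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # Every rpc.py call that follows must name the same socket:
    #   rpc.py -s /var/tmp/spdk-nbd.sock <method> ...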
00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:21.324 14:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:21.324 [2024-12-09 14:39:59.054389] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:21.324 [2024-12-09 14:39:59.054509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.324 [2024-12-09 14:39:59.208628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.324 [2024-12-09 14:39:59.324735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:21.889 14:39:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.217 1+0 records in 00:07:22.217 1+0 records out 00:07:22.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461932 s, 8.9 MB/s 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:22.217 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.495 1+0 records in 00:07:22.495 1+0 records out 00:07:22.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394572 s, 10.4 MB/s 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
size=4096 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:22.495 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.754 1+0 records in 00:07:22.754 1+0 records out 00:07:22.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373184 s, 11.0 MB/s 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:22.754 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.012 14:40:00 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.012 1+0 records in 00:07:23.012 1+0 records out 00:07:23.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510789 s, 8.0 MB/s 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:23.012 14:40:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:23.012 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.271 1+0 records in 00:07:23.271 1+0 records out 00:07:23.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592786 s, 6.9 MB/s 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.271 14:40:01 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.271 1+0 records in 00:07:23.271 1+0 records out 00:07:23.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464244 s, 8.8 MB/s 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:23.271 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd0", 00:07:23.529 "bdev_name": "Nvme0n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd1", 00:07:23.529 "bdev_name": "Nvme1n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd2", 00:07:23.529 "bdev_name": "Nvme2n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd3", 00:07:23.529 "bdev_name": "Nvme2n2" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd4", 00:07:23.529 "bdev_name": "Nvme2n3" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd5", 00:07:23.529 "bdev_name": "Nvme3n1" 00:07:23.529 } 00:07:23.529 ]' 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq 
-r '.[] | .nbd_device')) 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd0", 00:07:23.529 "bdev_name": "Nvme0n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd1", 00:07:23.529 "bdev_name": "Nvme1n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd2", 00:07:23.529 "bdev_name": "Nvme2n1" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd3", 00:07:23.529 "bdev_name": "Nvme2n2" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd4", 00:07:23.529 "bdev_name": "Nvme2n3" 00:07:23.529 }, 00:07:23.529 { 00:07:23.529 "nbd_device": "/dev/nbd5", 00:07:23.529 "bdev_name": "Nvme3n1" 00:07:23.529 } 00:07:23.529 ]' 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.529 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.787 14:40:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.045 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.303 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.561 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:24.819 14:40:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.819 14:40:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.078 
14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.078 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:25.336 /dev/nbd0 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.336 1+0 records in 00:07:25.336 1+0 records out 00:07:25.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363498 s, 11.3 MB/s 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.336 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:25.593 /dev/nbd1 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.593 14:40:03 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.593 1+0 records in 00:07:25.593 1+0 records out 00:07:25.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595199 s, 6.9 MB/s 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.593 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:25.851 /dev/nbd10 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.851 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.852 1+0 records in 00:07:25.852 1+0 records out 00:07:25.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411175 s, 10.0 MB/s 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.852 14:40:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:26.109 /dev/nbd11 
00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.109 1+0 records in 00:07:26.109 1+0 records out 00:07:26.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376804 s, 10.9 MB/s 00:07:26.109 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:26.110 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:26.367 /dev/nbd12 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.367 1+0 records in 00:07:26.367 1+0 records out 00:07:26.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498914 s, 8.2 MB/s 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.367 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:26.368 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:26.625 /dev/nbd13 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.625 1+0 records in 00:07:26.625 1+0 records out 00:07:26.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512946 s, 8.0 MB/s 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.625 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd0", 00:07:26.882 "bdev_name": "Nvme0n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd1", 
00:07:26.882 "bdev_name": "Nvme1n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd10", 00:07:26.882 "bdev_name": "Nvme2n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd11", 00:07:26.882 "bdev_name": "Nvme2n2" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd12", 00:07:26.882 "bdev_name": "Nvme2n3" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd13", 00:07:26.882 "bdev_name": "Nvme3n1" 00:07:26.882 } 00:07:26.882 ]' 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd0", 00:07:26.882 "bdev_name": "Nvme0n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd1", 00:07:26.882 "bdev_name": "Nvme1n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd10", 00:07:26.882 "bdev_name": "Nvme2n1" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd11", 00:07:26.882 "bdev_name": "Nvme2n2" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd12", 00:07:26.882 "bdev_name": "Nvme2n3" 00:07:26.882 }, 00:07:26.882 { 00:07:26.882 "nbd_device": "/dev/nbd13", 00:07:26.882 "bdev_name": "Nvme3n1" 00:07:26.882 } 00:07:26.882 ]' 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.882 /dev/nbd1 00:07:26.882 /dev/nbd10 00:07:26.882 /dev/nbd11 00:07:26.882 /dev/nbd12 00:07:26.882 /dev/nbd13' 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.882 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.882 /dev/nbd1 00:07:26.882 /dev/nbd10 00:07:26.882 /dev/nbd11 00:07:26.882 /dev/nbd12 00:07:26.882 /dev/nbd13' 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:26.883 256+0 records in 00:07:26.883 256+0 records out 00:07:26.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100278 s, 105 MB/s 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.883 
256+0 records in 00:07:26.883 256+0 records out 00:07:26.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0658647 s, 15.9 MB/s 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.883 256+0 records in 00:07:26.883 256+0 records out 00:07:26.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.066983 s, 15.7 MB/s 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.883 14:40:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:27.139 256+0 records in 00:07:27.139 256+0 records out 00:07:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644373 s, 16.3 MB/s 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:27.139 256+0 records in 00:07:27.139 256+0 records out 00:07:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631547 s, 16.6 MB/s 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:27.139 256+0 records in 00:07:27.139 256+0 records out 00:07:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643629 s, 16.3 MB/s 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:27.139 256+0 records in 00:07:27.139 256+0 records out 00:07:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0618454 s, 17.0 MB/s 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 
00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.139 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.396 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.654 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.912 14:40:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.170 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:28.428 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:28.428 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:28.428 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:28.428 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:28.429 14:40:06 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.429 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:28.686 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:28.944 malloc_lvol_verify 00:07:28.944 14:40:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:29.201 583628b4-1b1e-4501-96bd-5123f5869584 00:07:29.201 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:29.459 be0e6ba7-b00f-49d1-b7df-946c48e55a01 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:29.459 /dev/nbd0 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:29.459 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:29.459 mke2fs 1.47.0 (5-Feb-2023) 00:07:29.718 Discarding device blocks: 0/4096 done 00:07:29.718 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:29.718 00:07:29.718 Allocating group tables: 0/1 done 00:07:29.718 Writing inode tables: 0/1 done 00:07:29.718 Creating journal (1024 blocks): done 00:07:29.718 Writing superblocks and filesystem accounting information: 0/1 done 00:07:29.718 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61363 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61363 ']' 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61363 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61363 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.718 killing process with pid 61363 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61363' 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61363 00:07:29.718 14:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61363 00:07:30.666 14:40:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:30.666 00:07:30.666 
real 0m9.623s 00:07:30.666 user 0m13.894s 00:07:30.666 sys 0m3.001s 00:07:30.666 14:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.666 14:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:30.666 ************************************ 00:07:30.666 END TEST bdev_nbd 00:07:30.666 ************************************ 00:07:30.666 14:40:08 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:30.666 14:40:08 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:30.666 skipping fio tests on NVMe due to multi-ns failures. 00:07:30.667 14:40:08 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:30.667 14:40:08 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:30.667 14:40:08 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.667 14:40:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.667 14:40:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.667 14:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.667 ************************************ 00:07:30.667 START TEST bdev_verify 00:07:30.667 ************************************ 00:07:30.667 14:40:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.667 [2024-12-09 14:40:08.718865] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:30.667 [2024-12-09 14:40:08.718989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61744 ] 00:07:30.933 [2024-12-09 14:40:08.879187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.933 [2024-12-09 14:40:08.987628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.933 [2024-12-09 14:40:08.987693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.502 Running I/O for 5 seconds... 
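The bdev_verify stage launched above is bdevperf running a read-back verification workload against all six NVMe bdevs; the per-second samples and the latency table follow. As a minimal sketch, the same stage can be reproduced by hand outside the run_test wrapper, assuming the standard /home/vagrant/spdk_repo checkout and the bdev.json written earlier in this run (-q sets the queue depth, -o the I/O size in bytes, -w the workload, -t the duration in seconds; -C, -m 0x3 and the trailing '' are carried over verbatim from the traced command line, -m 0x3 matching the two reactors reported above):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Same invocation the log's run_test hands to bdevperf.
  "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''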
00:07:33.806 25984.00 IOPS, 101.50 MiB/s [2024-12-09T14:40:12.860Z] 26496.00 IOPS, 103.50 MiB/s [2024-12-09T14:40:13.793Z] 26048.00 IOPS, 101.75 MiB/s [2024-12-09T14:40:14.728Z] 25856.00 IOPS, 101.00 MiB/s [2024-12-09T14:40:14.728Z] 25868.80 IOPS, 101.05 MiB/s 00:07:36.606 Latency(us) 00:07:36.606 [2024-12-09T14:40:14.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0xbd0bd 00:07:36.606 Nvme0n1 : 5.05 2231.25 8.72 0.00 0.00 57255.36 11393.18 61704.66 00:07:36.606 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:36.606 Nvme0n1 : 5.04 2055.23 8.03 0.00 0.00 62098.01 13510.50 70173.93 00:07:36.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0xa0000 00:07:36.606 Nvme1n1 : 5.05 2229.98 8.71 0.00 0.00 57188.52 11443.59 54041.99 00:07:36.606 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0xa0000 length 0xa0000 00:07:36.606 Nvme1n1 : 5.05 2053.96 8.02 0.00 0.00 61972.58 15829.46 58074.98 00:07:36.606 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0x80000 00:07:36.606 Nvme2n1 : 5.06 2227.76 8.70 0.00 0.00 57123.31 14216.27 49000.76 00:07:36.606 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x80000 length 0x80000 00:07:36.606 Nvme2n1 : 5.05 2052.74 8.02 0.00 0.00 61854.72 15627.82 52428.80 00:07:36.606 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0x80000 00:07:36.606 Nvme2n2 : 5.06 2227.13 8.70 0.00 0.00 57033.43 13409.67 43354.58 00:07:36.606 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x80000 length 0x80000 00:07:36.606 Nvme2n2 : 5.07 2058.43 8.04 0.00 0.00 61541.87 5646.18 54445.29 00:07:36.606 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0x80000 00:07:36.606 Nvme2n3 : 5.06 2226.52 8.70 0.00 0.00 56948.82 12855.14 45371.08 00:07:36.606 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x80000 length 0x80000 00:07:36.606 Nvme2n3 : 5.09 2064.06 8.06 0.00 0.00 61327.03 11443.59 58478.28 00:07:36.606 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x0 length 0x20000 00:07:36.606 Nvme3n1 : 5.06 2225.94 8.70 0.00 0.00 56857.44 7108.14 47387.57 00:07:36.606 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.606 Verification LBA range: start 0x20000 length 0x20000 00:07:36.606 Nvme3n1 : 5.09 2062.86 8.06 0.00 0.00 61288.42 8519.68 60898.07 00:07:36.606 [2024-12-09T14:40:14.728Z] =================================================================================================================== 00:07:36.606 [2024-12-09T14:40:14.728Z] Total : 25715.85 100.45 0.00 0.00 59283.70 5646.18 70173.93 00:07:39.166 00:07:39.166 real 0m8.095s 00:07:39.166 user 0m15.214s 00:07:39.166 sys 0m0.264s 00:07:39.166 14:40:16 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.166 14:40:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 ************************************ 00:07:39.166 END TEST bdev_verify 00:07:39.166 ************************************ 00:07:39.166 14:40:16 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:39.166 14:40:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:39.166 14:40:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.166 14:40:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 ************************************ 00:07:39.166 START TEST bdev_verify_big_io 00:07:39.166 ************************************ 00:07:39.166 14:40:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:39.166 [2024-12-09 14:40:16.849903] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:39.166 [2024-12-09 14:40:16.850013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ] 00:07:39.166 [2024-12-09 14:40:17.008765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.166 [2024-12-09 14:40:17.127519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.166 [2024-12-09 14:40:17.127620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.733 Running I/O for 5 seconds... 
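bdev_verify_big_io repeats the same verification workload with -o 65536, i.e. 64 KiB I/Os instead of 4 KiB, which is why the IOPS figures below are much lower even though each I/O moves sixteen times the data. A hedged sketch covering both sizes in one loop (the loop is illustrative only; the log runs them as two separate run_test stages):

  SPDK=/home/vagrant/spdk_repo/spdk
  for io_size in 4096 65536; do
      "$SPDK/build/examples/bdevperf" \
          --json "$SPDK/test/bdev/bdev.json" \
          -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3 ''
  done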
00:07:45.285 1331.00 IOPS, 83.19 MiB/s [2024-12-09T14:40:23.973Z] 2305.50 IOPS, 144.09 MiB/s [2024-12-09T14:40:24.232Z] 2716.33 IOPS, 169.77 MiB/s 00:07:46.110 Latency(us) 00:07:46.110 [2024-12-09T14:40:24.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.110 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0xbd0b 00:07:46.110 Nvme0n1 : 5.59 137.47 8.59 0.00 0.00 894143.02 17543.48 1006632.96 00:07:46.110 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:46.110 Nvme0n1 : 5.90 108.39 6.77 0.00 0.00 1140339.77 9074.22 1529307.77 00:07:46.110 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0xa000 00:07:46.110 Nvme1n1 : 5.74 138.17 8.64 0.00 0.00 859696.02 106470.79 838860.80 00:07:46.110 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0xa000 length 0xa000 00:07:46.110 Nvme1n1 : 5.90 105.32 6.58 0.00 0.00 1102539.09 59688.17 1238932.87 00:07:46.110 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0x8000 00:07:46.110 Nvme2n1 : 5.80 143.48 8.97 0.00 0.00 814913.01 56058.49 751748.33 00:07:46.110 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x8000 length 0x8000 00:07:46.110 Nvme2n1 : 5.90 108.51 6.78 0.00 0.00 1021014.65 87919.06 1064707.94 00:07:46.110 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0x8000 00:07:46.110 Nvme2n2 : 5.88 148.85 9.30 0.00 0.00 765013.22 22383.06 754974.72 00:07:46.110 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x8000 length 0x8000 00:07:46.110 Nvme2n2 : 5.99 125.43 7.84 0.00 0.00 848584.60 14216.27 1084066.26 00:07:46.110 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0x8000 00:07:46.110 Nvme2n3 : 5.89 152.76 9.55 0.00 0.00 726693.58 54848.59 877577.45 00:07:46.110 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x8000 length 0x8000 00:07:46.110 Nvme2n3 : 6.10 164.84 10.30 0.00 0.00 625358.31 6427.57 1103424.59 00:07:46.110 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x0 length 0x2000 00:07:46.110 Nvme3n1 : 5.89 163.00 10.19 0.00 0.00 662721.98 1235.10 884030.23 00:07:46.110 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:46.110 Verification LBA range: start 0x2000 length 0x2000 00:07:46.110 Nvme3n1 : 6.32 300.57 18.79 0.00 0.00 329380.81 159.11 1116330.14 00:07:46.110 [2024-12-09T14:40:24.232Z] =================================================================================================================== 00:07:46.110 [2024-12-09T14:40:24.232Z] Total : 1796.79 112.30 0.00 0.00 745487.27 159.11 1529307.77 00:07:49.389 00:07:49.389 real 0m10.584s 00:07:49.389 user 0m20.139s 00:07:49.390 sys 0m0.286s 00:07:49.390 14:40:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.390 14:40:27 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:49.390 ************************************ 00:07:49.390 END TEST bdev_verify_big_io 00:07:49.390 ************************************ 00:07:49.390 14:40:27 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.390 14:40:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:49.390 14:40:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.390 14:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.390 ************************************ 00:07:49.390 START TEST bdev_write_zeroes 00:07:49.390 ************************************ 00:07:49.390 14:40:27 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.390 [2024-12-09 14:40:27.483520] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:49.390 [2024-12-09 14:40:27.483653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:07:49.671 [2024-12-09 14:40:27.643380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.671 [2024-12-09 14:40:27.758340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.286 Running I/O for 1 seconds... 
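bdev_write_zeroes switches the workload to -w write_zeroes for one second on a single core (the EAL parameters above show -c 0x1, hence the lone reactor on core 0). As a sanity check on the summary that follows, the MiB/s column is just IOPS multiplied by the 4 KiB I/O size; for the 74112.00 IOPS headline figure:

  # Hedged arithmetic check, not part of the test itself.
  awk 'BEGIN { printf "%.2f MiB/s\n", 74112 * 4096 / (1024 * 1024) }'
  # prints 289.50 MiB/s, matching the first line of the results below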
00:07:51.658 74112.00 IOPS, 289.50 MiB/s 00:07:51.658 Latency(us) 00:07:51.658 [2024-12-09T14:40:29.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.658 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme0n1 : 1.02 12270.48 47.93 0.00 0.00 10406.49 8620.50 21475.64 00:07:51.658 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme1n1 : 1.02 12256.60 47.88 0.00 0.00 10380.26 8620.50 19963.27 00:07:51.658 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme2n1 : 1.02 12242.70 47.82 0.00 0.00 10352.88 8570.09 18955.03 00:07:51.658 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme2n2 : 1.03 12228.64 47.77 0.00 0.00 10330.41 8620.50 18450.90 00:07:51.658 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme2n3 : 1.03 12214.83 47.71 0.00 0.00 10307.52 5494.94 18854.20 00:07:51.658 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:51.658 Nvme3n1 : 1.03 12201.10 47.66 0.00 0.00 10300.44 4864.79 20467.40 00:07:51.658 [2024-12-09T14:40:29.780Z] =================================================================================================================== 00:07:51.658 [2024-12-09T14:40:29.780Z] Total : 73414.34 286.77 0.00 0.00 10346.33 4864.79 21475.64 00:07:52.224 00:07:52.224 real 0m2.758s 00:07:52.224 user 0m2.445s 00:07:52.224 sys 0m0.199s 00:07:52.224 14:40:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.224 ************************************ 00:07:52.224 END TEST bdev_write_zeroes 00:07:52.224 ************************************ 00:07:52.224 14:40:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:52.224 14:40:30 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:52.224 14:40:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:52.224 14:40:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.224 14:40:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.224 ************************************ 00:07:52.224 START TEST bdev_json_nonenclosed 00:07:52.224 ************************************ 00:07:52.224 14:40:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:52.224 [2024-12-09 14:40:30.299361] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:07:52.224 [2024-12-09 14:40:30.299513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:07:52.484 [2024-12-09 14:40:30.464037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.484 [2024-12-09 14:40:30.586662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.484 [2024-12-09 14:40:30.586770] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:52.484 [2024-12-09 14:40:30.586791] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:52.484 [2024-12-09 14:40:30.586814] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.745 00:07:52.745 real 0m0.551s 00:07:52.745 user 0m0.345s 00:07:52.745 sys 0m0.099s 00:07:52.745 14:40:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.745 ************************************ 00:07:52.745 14:40:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:52.745 END TEST bdev_json_nonenclosed 00:07:52.745 ************************************ 00:07:52.745 14:40:30 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:52.745 14:40:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:52.745 14:40:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.745 14:40:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.745 ************************************ 00:07:52.745 START TEST bdev_json_nonarray 00:07:52.745 ************************************ 00:07:52.745 14:40:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:53.004 [2024-12-09 14:40:30.881809] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:53.004 [2024-12-09 14:40:30.881931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:07:53.004 [2024-12-09 14:40:31.043559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.264 [2024-12-09 14:40:31.158969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.264 [2024-12-09 14:40:31.159073] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:53.264 [2024-12-09 14:40:31.159091] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:53.264 [2024-12-09 14:40:31.159102] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.264 00:07:53.264 real 0m0.528s 00:07:53.264 user 0m0.326s 00:07:53.264 sys 0m0.098s 00:07:53.264 14:40:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.264 14:40:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:53.264 ************************************ 00:07:53.264 END TEST bdev_json_nonarray 00:07:53.264 ************************************ 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:53.264 14:40:31 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:53.523 14:40:31 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.523 14:40:31 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:53.523 14:40:31 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:53.523 14:40:31 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:53.523 14:40:31 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:53.523 00:07:53.523 real 0m42.513s 00:07:53.523 user 1m10.776s 00:07:53.523 sys 0m5.271s 00:07:53.523 14:40:31 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.523 14:40:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.523 ************************************ 00:07:53.523 END TEST blockdev_nvme 00:07:53.523 ************************************ 00:07:53.523 14:40:31 -- spdk/autotest.sh@209 -- # uname -s 00:07:53.523 14:40:31 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:53.523 14:40:31 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:53.523 14:40:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.523 14:40:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.523 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:07:53.523 ************************************ 00:07:53.523 START TEST blockdev_nvme_gpt 00:07:53.523 ************************************ 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:53.523 * Looking for test storage... 
00:07:53.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.523 14:40:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.523 --rc genhtml_branch_coverage=1 00:07:53.523 --rc genhtml_function_coverage=1 00:07:53.523 --rc genhtml_legend=1 00:07:53.523 --rc geninfo_all_blocks=1 00:07:53.523 --rc geninfo_unexecuted_blocks=1 00:07:53.523 00:07:53.523 ' 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.523 --rc 
genhtml_branch_coverage=1 00:07:53.523 --rc genhtml_function_coverage=1 00:07:53.523 --rc genhtml_legend=1 00:07:53.523 --rc geninfo_all_blocks=1 00:07:53.523 --rc geninfo_unexecuted_blocks=1 00:07:53.523 00:07:53.523 ' 00:07:53.523 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.523 --rc genhtml_branch_coverage=1 00:07:53.523 --rc genhtml_function_coverage=1 00:07:53.524 --rc genhtml_legend=1 00:07:53.524 --rc geninfo_all_blocks=1 00:07:53.524 --rc geninfo_unexecuted_blocks=1 00:07:53.524 00:07:53.524 ' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.524 --rc genhtml_branch_coverage=1 00:07:53.524 --rc genhtml_function_coverage=1 00:07:53.524 --rc genhtml_legend=1 00:07:53.524 --rc geninfo_all_blocks=1 00:07:53.524 --rc geninfo_unexecuted_blocks=1 00:07:53.524 00:07:53.524 ' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62118 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62118 
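The scripts/common.sh trace above (lt 1.15 2 ending in return 0) is the harness confirming that the installed lcov predates 2.x before it exports the legacy --rc lcov_* option spellings. A minimal sketch of that comparator, reconstructed from the trace (the real helper also normalizes non-numeric components through decimal(), elided here):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-: op=$2
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v d1 d2
      # compare component by component, padding the shorter version with zeros
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
          if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }

  lt 1.15 2 && echo 'lcov predates 2.x'   # returns 0 here, as in the trace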
00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62118 ']' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.524 14:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:53.524 14:40:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.781 [2024-12-09 14:40:31.655051] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:07:53.781 [2024-12-09 14:40:31.655147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:07:53.781 [2024-12-09 14:40:31.808334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.041 [2024-12-09 14:40:31.924723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.611 14:40:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.611 14:40:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:54.611 14:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:54.611 14:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:54.611 14:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:54.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.130 Waiting for block devices as requested 00:07:55.130 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:55.130 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:55.130 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:55.130 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:00.488 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:00.488 14:40:38 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:00.488 BYT; 00:08:00.488 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:00.488 BYT; 00:08:00.488 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:00.488 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:00.488 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:00.489 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:00.489 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:00.489 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:00.489 14:40:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:00.489 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:00.489 14:40:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:01.422 The operation has completed successfully. 00:08:01.422 14:40:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:02.355 The operation has completed successfully. 00:08:02.355 14:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:02.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:03.178 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:03.178 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:03.178 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:03.436 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:03.436 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.436 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.436 [] 00:08:03.436 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.436 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:03.436 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.436 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:03.694 14:40:41 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:03.694 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.694 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.953 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.953 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:03.953 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:03.954 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "22137655-b882-4047-baa5-d540b569ea13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "22137655-b882-4047-baa5-d540b569ea13",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "596e615e-7c7e-432c-983a-75e9541f1f35"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "596e615e-7c7e-432c-983a-75e9541f1f35",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ae0d657e-0498-410c-b622-06b1e5f8e9b8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae0d657e-0498-410c-b622-06b1e5f8e9b8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b4b79170-7b0f-4b33-aaab-4c2272f50ec5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b4b79170-7b0f-4b33-aaab-4c2272f50ec5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "deb26db6-60f2-4f50-8540-04285b9ced61"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "deb26db6-60f2-4f50-8540-04285b9ced61",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:03.954 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:03.954 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:03.954 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:03.954 14:40:41 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62118 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62118 ']' 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62118 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62118 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.954 killing process with pid 62118 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62118' 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62118 00:08:03.954 14:40:41 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62118 00:08:05.327 14:40:43 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:05.327 14:40:43 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:05.327 14:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:05.328 14:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.328 14:40:43 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:05.328 ************************************ 00:08:05.328 START TEST bdev_hello_world 00:08:05.328 ************************************ 00:08:05.328 14:40:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:05.328 [2024-12-09 14:40:43.247029] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:05.328 [2024-12-09 14:40:43.247158] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:08:05.328 [2024-12-09 14:40:43.405461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.586 [2024-12-09 14:40:43.515125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.152 [2024-12-09 14:40:44.081954] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:06.152 [2024-12-09 14:40:44.081998] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:06.152 [2024-12-09 14:40:44.082019] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:06.152 [2024-12-09 14:40:44.084501] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:06.152 [2024-12-09 14:40:44.084954] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:06.152 [2024-12-09 14:40:44.084979] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:06.152 [2024-12-09 14:40:44.085135] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
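The bdev_hello_world pass above reduces to one example invocation; a sketch of the same call as run in this job, where -b names the bdev that the example opens, writes "Hello World!" to, and reads back:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1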
00:08:06.152 00:08:06.152 [2024-12-09 14:40:44.085160] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:07.085 00:08:07.085 real 0m1.661s 00:08:07.085 user 0m1.353s 00:08:07.085 sys 0m0.201s 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:07.085 ************************************ 00:08:07.085 END TEST bdev_hello_world 00:08:07.085 ************************************ 00:08:07.085 14:40:44 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:07.085 14:40:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.085 14:40:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.085 14:40:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:07.085 ************************************ 00:08:07.085 START TEST bdev_bounds 00:08:07.085 ************************************ 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62773 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:07.085 Process bdevio pid: 62773 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62773' 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62773 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62773 ']' 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.085 14:40:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:07.085 [2024-12-09 14:40:44.947031] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:07.085 [2024-12-09 14:40:44.947150] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62773 ] 00:08:07.085 [2024-12-09 14:40:45.107862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.342 [2024-12-09 14:40:45.209675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.342 [2024-12-09 14:40:45.209777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.342 [2024-12-09 14:40:45.209832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.907 14:40:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.907 14:40:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:07.907 14:40:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
I/O targets:
00:08:07.907 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:07.907 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:08:07.907 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:08:07.907 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:07.907 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:07.907 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:07.907 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:07.907
00:08:07.907
00:08:07.907 CUnit - A unit testing framework for C - Version 2.1-3
00:08:07.907 http://cunit.sourceforge.net/
00:08:07.907
00:08:07.907
00:08:07.907 Suite: bdevio tests on: Nvme3n1
00:08:07.907 Test: blockdev write read block ...passed
00:08:07.907 Test: blockdev write zeroes read block ...passed
00:08:07.907 Test: blockdev write zeroes read no split ...passed
00:08:07.907 Test: blockdev write zeroes read split ...passed
00:08:07.907 Test: blockdev write zeroes read split partial ...passed
00:08:07.907 Test: blockdev reset ...[2024-12-09 14:40:45.946738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:07.907 passed 00:08:07.907 Test: blockdev write read 8 blocks ...[2024-12-09 14:40:45.949712] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:07.907 passed 00:08:07.907 Test: blockdev write read size > 128k ...passed 00:08:07.907 Test: blockdev write read invalid size ...passed 00:08:07.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:07.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:07.907 Test: blockdev write read max offset ...passed 00:08:07.907 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:07.907 Test: blockdev writev readv 8 blocks ...passed 00:08:07.907 Test: blockdev writev readv 30 x 1block ...passed 00:08:07.907 Test: blockdev writev readv block ...passed 00:08:07.907 Test: blockdev writev readv size > 128k ...passed 00:08:07.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:07.907 Test: blockdev comparev and writev ...[2024-12-09 14:40:45.956586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa604000 len:0x1000 00:08:07.907 [2024-12-09 14:40:45.956641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:07.907 passed 00:08:07.907 Test: blockdev nvme passthru rw ...passed 00:08:07.907 Test: blockdev nvme passthru vendor specific ...passed 00:08:07.907 Test: blockdev nvme admin passthru ...[2024-12-09 14:40:45.957342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:07.908 [2024-12-09 14:40:45.957372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:07.908 passed 00:08:07.908 Test: blockdev copy ...passed 00:08:07.908 Suite: bdevio tests on: Nvme2n3 00:08:07.908 Test: blockdev write read block ...passed 00:08:07.908 Test: blockdev write zeroes read block ...passed 00:08:07.908 Test: blockdev write zeroes read no split ...passed 00:08:07.908 Test: blockdev write zeroes read split ...passed 00:08:07.908 Test: blockdev write zeroes read split partial ...passed 00:08:07.908 Test: blockdev reset ...[2024-12-09 14:40:46.014421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:07.908 passed 00:08:07.908 Test: blockdev write read 8 blocks ...[2024-12-09 14:40:46.017915] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:07.908 passed 00:08:07.908 Test: blockdev write read size > 128k ...passed 00:08:07.908 Test: blockdev write read invalid size ...passed 00:08:07.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:07.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:07.908 Test: blockdev write read max offset ...passed 00:08:07.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:07.908 Test: blockdev writev readv 8 blocks ...passed 00:08:07.908 Test: blockdev writev readv 30 x 1block ...passed 00:08:07.908 Test: blockdev writev readv block ...passed 00:08:07.908 Test: blockdev writev readv size > 128k ...passed 00:08:07.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:07.908 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.024753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa602000 len:0x1000 00:08:07.908 [2024-12-09 14:40:46.024812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:07.908 passed 00:08:07.908 Test: blockdev nvme passthru rw ...passed 00:08:07.908 Test: blockdev nvme passthru vendor specific ...passed 00:08:07.908 Test: blockdev nvme admin passthru ...[2024-12-09 14:40:46.025665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:07.908 [2024-12-09 14:40:46.025698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev copy ...passed 00:08:08.166 Suite: bdevio tests on: Nvme2n2 00:08:08.166 Test: blockdev write read block ...passed 00:08:08.166 Test: blockdev write zeroes read block ...passed 00:08:08.166 Test: blockdev write zeroes read no split ...passed 00:08:08.166 Test: blockdev write zeroes read split ...passed 00:08:08.166 Test: blockdev write zeroes read split partial ...passed 00:08:08.166 Test: blockdev reset ...[2024-12-09 14:40:46.082783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:08.166 [2024-12-09 14:40:46.086078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:08.166 passed 00:08:08.166 Test: blockdev write read 8 blocks ...passed 00:08:08.166 Test: blockdev write read size > 128k ...passed 00:08:08.166 Test: blockdev write read invalid size ...passed 00:08:08.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:08.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:08.166 Test: blockdev write read max offset ...passed 00:08:08.166 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:08.166 Test: blockdev writev readv 8 blocks ...passed 00:08:08.166 Test: blockdev writev readv 30 x 1block ...passed 00:08:08.166 Test: blockdev writev readv block ...passed 00:08:08.166 Test: blockdev writev readv size > 128k ...passed 00:08:08.166 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:08.166 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.094050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:08:08.166 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2e4e38000 len:0x1000 00:08:08.166 [2024-12-09 14:40:46.094419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:40:46.095284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:08.166 passed 00:08:08.166 Test: blockdev nvme admin passthru ...[2024-12-09 14:40:46.095353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev copy ...passed 00:08:08.166 Suite: bdevio tests on: Nvme2n1 00:08:08.166 Test: blockdev write read block ...passed 00:08:08.166 Test: blockdev write zeroes read block ...passed 00:08:08.166 Test: blockdev write zeroes read no split ...passed 00:08:08.166 Test: blockdev write zeroes read split ...passed 00:08:08.166 Test: blockdev write zeroes read split partial ...passed 00:08:08.166 Test: blockdev reset ...[2024-12-09 14:40:46.155236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:08.166 [2024-12-09 14:40:46.158284] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:08.166 passed 00:08:08.166 Test: blockdev write read 8 blocks ...passed 00:08:08.166 Test: blockdev write read size > 128k ...passed 00:08:08.166 Test: blockdev write read invalid size ...passed 00:08:08.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:08.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:08.166 Test: blockdev write read max offset ...passed 00:08:08.166 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:08.166 Test: blockdev writev readv 8 blocks ...passed 00:08:08.166 Test: blockdev writev readv 30 x 1block ...passed 00:08:08.166 Test: blockdev writev readv block ...passed 00:08:08.166 Test: blockdev writev readv size > 128k ...passed 00:08:08.166 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:08.166 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.167370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4e34000 len:0x1000 00:08:08.166 [2024-12-09 14:40:46.167826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev nvme passthru rw ...passed 00:08:08.166 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:40:46.169157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:08.166 [2024-12-09 14:40:46.169398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev nvme admin passthru ...passed 00:08:08.166 Test: blockdev copy ...passed 00:08:08.166 Suite: bdevio tests on: Nvme1n1p2 00:08:08.166 Test: blockdev write read block ...passed 00:08:08.166 Test: blockdev write zeroes read block ...passed 00:08:08.166 Test: blockdev write zeroes read no split ...passed 00:08:08.166 Test: blockdev write zeroes read split ...passed 00:08:08.166 Test: blockdev write zeroes read split partial ...passed 00:08:08.166 Test: blockdev reset ...[2024-12-09 14:40:46.239425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:08.166 [2024-12-09 14:40:46.242274] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:08.166 passed 00:08:08.166 Test: blockdev write read 8 blocks ...passed 00:08:08.166 Test: blockdev write read size > 128k ...passed 00:08:08.166 Test: blockdev write read invalid size ...passed 00:08:08.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:08.166 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:08.166 Test: blockdev write read max offset ...passed 00:08:08.166 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:08.166 Test: blockdev writev readv 8 blocks ...passed 00:08:08.166 Test: blockdev writev readv 30 x 1block ...passed 00:08:08.166 Test: blockdev writev readv block ...passed 00:08:08.166 Test: blockdev writev readv size > 128k ...passed 00:08:08.166 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:08.166 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.251037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e4e30000 len:0x1000 00:08:08.166 [2024-12-09 14:40:46.251094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:08.166 passed 00:08:08.166 Test: blockdev nvme passthru rw ...passed 00:08:08.166 Test: blockdev nvme passthru vendor specific ...passed 00:08:08.166 Test: blockdev nvme admin passthru ...passed 00:08:08.166 Test: blockdev copy ...passed 00:08:08.166 Suite: bdevio tests on: Nvme1n1p1 00:08:08.166 Test: blockdev write read block ...passed 00:08:08.166 Test: blockdev write zeroes read block ...passed 00:08:08.166 Test: blockdev write zeroes read no split ...passed 00:08:08.166 Test: blockdev write zeroes read split ...passed 00:08:08.424 Test: blockdev write zeroes read split partial ...passed 00:08:08.424 Test: blockdev reset ...[2024-12-09 14:40:46.292689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:08.424 passed 00:08:08.424 Test: blockdev write read 8 blocks ...[2024-12-09 14:40:46.295436] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
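[Annotation] One detail worth noting in these partition suites: the tests address block 0 of each part bdev, yet the COMPARE above lands on nsid:1 at lba:655360, and the Nvme1n1p1 suite below shows the same translation at lba:256, because the GPT part bdev layer adds each partition's start offset before the I/O reaches the namespace. With the 4 KiB block size implied by the single-block 0x1000-byte transfers, the byte offsets work out as:

    echo $(( 256 * 4096 ))      # 1048576    -> Nvme1n1p1 starts 1 MiB in (typical GPT alignment)
    echo $(( 655360 * 4096 ))   # 2684354560 -> Nvme1n1p2 starts 2.5 GiB in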
00:08:08.424 passed 00:08:08.424 Test: blockdev write read size > 128k ...passed 00:08:08.424 Test: blockdev write read invalid size ...passed 00:08:08.424 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:08.424 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:08.424 Test: blockdev write read max offset ...passed 00:08:08.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:08.424 Test: blockdev writev readv 8 blocks ...passed 00:08:08.424 Test: blockdev writev readv 30 x 1block ...passed 00:08:08.424 Test: blockdev writev readv block ...passed 00:08:08.424 Test: blockdev writev readv size > 128k ...passed 00:08:08.424 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:08.424 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.302046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2aa80e000 len:0x1000 00:08:08.424 [2024-12-09 14:40:46.302092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:08.424 passed 00:08:08.424 Test: blockdev nvme passthru rw ...passed 00:08:08.424 Test: blockdev nvme passthru vendor specific ...passed 00:08:08.424 Test: blockdev nvme admin passthru ...passed 00:08:08.424 Test: blockdev copy ...passed 00:08:08.424 Suite: bdevio tests on: Nvme0n1 00:08:08.424 Test: blockdev write read block ...passed 00:08:08.424 Test: blockdev write zeroes read block ...passed 00:08:08.424 Test: blockdev write zeroes read no split ...passed 00:08:08.424 Test: blockdev write zeroes read split ...passed 00:08:08.424 Test: blockdev write zeroes read split partial ...passed 00:08:08.424 Test: blockdev reset ...[2024-12-09 14:40:46.343402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:08.424 [2024-12-09 14:40:46.346148] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:08.424 passed 00:08:08.424 Test: blockdev write read 8 blocks ...passed 00:08:08.424 Test: blockdev write read size > 128k ...passed 00:08:08.424 Test: blockdev write read invalid size ...passed 00:08:08.424 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:08.424 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:08.424 Test: blockdev write read max offset ...passed 00:08:08.424 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:08.424 Test: blockdev writev readv 8 blocks ...passed 00:08:08.424 Test: blockdev writev readv 30 x 1block ...passed 00:08:08.424 Test: blockdev writev readv block ...passed 00:08:08.424 Test: blockdev writev readv size > 128k ...passed 00:08:08.424 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:08.424 Test: blockdev comparev and writev ...[2024-12-09 14:40:46.352738] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:08.424 separate metadata which is not supported yet. 
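[Annotation] The skipped comparev_and_writev on Nvme0n1 is keyed off the bdev advertising separate (non-interleaved) metadata, which bdevio's compare path does not handle yet. One way to see up front which bdevs would take this skip is to filter rpc.py bdev_get_bdevs output; the md_size/md_interleave field names are taken from that RPC's JSON and should be treated as an assumption about this SPDK build:

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.md_size > 0 and (.md_interleave | not)) | .name'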
00:08:08.424 passed 00:08:08.424 Test: blockdev nvme passthru rw ...passed 00:08:08.424 Test: blockdev nvme passthru vendor specific ...[2024-12-09 14:40:46.353392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:08.424 [2024-12-09 14:40:46.353539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:08:08.424 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:08:08.424 passed 00:08:08.424 Test: blockdev copy ...passed 00:08:08.424 00:08:08.424 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.424 suites 7 7 n/a 0 0 00:08:08.424 tests 161 161 161 0 0 00:08:08.424 asserts 1025 1025 1025 0 n/a 00:08:08.424 00:08:08.424 Elapsed time = 1.219 seconds 00:08:08.424 0 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62773 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62773 ']' 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62773 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62773 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62773' 00:08:08.424 killing process with pid 62773 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62773 00:08:08.424 14:40:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62773 00:08:09.017 14:40:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:09.017 00:08:09.017 real 0m2.241s 00:08:09.017 user 0m5.730s 00:08:09.017 sys 0m0.276s 00:08:09.017 14:40:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.017 14:40:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:09.017 ************************************ 00:08:09.017 END TEST bdev_bounds 00:08:09.017 ************************************ 00:08:09.277 14:40:47 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:09.277 14:40:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.277 14:40:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.277 14:40:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.277 ************************************ 00:08:09.277 START TEST bdev_nbd 00:08:09.277 ************************************ 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:09.277 14:40:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:09.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62833 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62833 /var/tmp/spdk-nbd.sock 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62833 ']' 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:09.277 14:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:09.277 [2024-12-09 14:40:47.240708] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
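[Annotation] Before any of the nbd RPCs below go out, the harness parks in waitforlisten until the freshly launched bdev_svc answers on /var/tmp/spdk-nbd.sock. A standalone sketch of that gate, polling with rpc_get_methods (a stock rpc.py method); the retry count and delay are assumptions, not the harness's values:

    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done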
00:08:09.277 [2024-12-09 14:40:47.241000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.535 [2024-12-09 14:40:47.403652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.535 [2024-12-09 14:40:47.513278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:10.101 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.359 1+0 records in 00:08:10.359 1+0 records out 00:08:10.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508737 s, 8.1 MB/s 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:10.359 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.617 1+0 records in 00:08:10.617 1+0 records out 00:08:10.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556921 s, 7.4 MB/s 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.617 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.618 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.618 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.618 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.618 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:10.618 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.875 1+0 records in 00:08:10.875 1+0 records out 00:08:10.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005328 s, 7.7 MB/s 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:10.875 14:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.133 14:40:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.133 1+0 records in 00:08:11.133 1+0 records out 00:08:11.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693741 s, 5.9 MB/s 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:11.134 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.392 1+0 records in 00:08:11.392 1+0 records out 00:08:11.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550357 s, 7.4 MB/s 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.392 1+0 records in 00:08:11.392 1+0 records out 00:08:11.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376001 s, 10.9 MB/s 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:11.392 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.650 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.651 1+0 records in 00:08:11.651 1+0 records out 00:08:11.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609433 s, 6.7 MB/s 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:11.651 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.908 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd0", 00:08:11.908 "bdev_name": "Nvme0n1" 00:08:11.908 }, 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd1", 00:08:11.908 "bdev_name": "Nvme1n1p1" 00:08:11.908 }, 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd2", 00:08:11.908 "bdev_name": "Nvme1n1p2" 00:08:11.908 }, 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd3", 00:08:11.908 "bdev_name": "Nvme2n1" 00:08:11.908 }, 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd4", 00:08:11.908 "bdev_name": "Nvme2n2" 00:08:11.908 }, 00:08:11.908 { 00:08:11.908 "nbd_device": "/dev/nbd5", 00:08:11.909 "bdev_name": "Nvme2n3" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd6", 00:08:11.909 "bdev_name": "Nvme3n1" 00:08:11.909 } 00:08:11.909 ]' 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd0", 00:08:11.909 "bdev_name": "Nvme0n1" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd1", 00:08:11.909 "bdev_name": "Nvme1n1p1" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd2", 00:08:11.909 "bdev_name": "Nvme1n1p2" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd3", 00:08:11.909 "bdev_name": "Nvme2n1" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd4", 00:08:11.909 "bdev_name": "Nvme2n2" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd5", 00:08:11.909 "bdev_name": "Nvme2n3" 00:08:11.909 }, 00:08:11.909 { 00:08:11.909 "nbd_device": "/dev/nbd6", 00:08:11.909 "bdev_name": "Nvme3n1" 00:08:11.909 } 00:08:11.909 ]' 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.909 14:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.166 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.424 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.682 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.682 14:40:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.940 14:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.198 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.455 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:13.712 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:13.713 
14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:13.713 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:13.971 /dev/nbd0 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.971 14:40:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.971 1+0 records in 00:08:13.971 1+0 records out 00:08:13.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351267 s, 11.7 MB/s 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:13.971 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:14.228 /dev/nbd1 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:14.228 14:40:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:14.228 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.228 1+0 records in 00:08:14.229 1+0 records out 00:08:14.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003712 s, 11.0 MB/s 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:14.229 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:14.487 /dev/nbd10 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.487 1+0 records in 00:08:14.487 1+0 records out 00:08:14.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478132 s, 8.6 MB/s 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:14.487 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:14.764 /dev/nbd11 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.764 1+0 records in 00:08:14.764 1+0 records out 00:08:14.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00299137 s, 1.4 MB/s 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:14.764 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:15.022 /dev/nbd12 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
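[Annotation] The waitfornbd pattern traced throughout this test polls /proc/partitions until the kernel has registered the device, then proves it is readable with a single 4 KiB O_DIRECT dd. Condensed into one standalone function: the 20-iteration bound, the grep -q -w match, and the dd flags all mirror the trace, while the sleep interval is an assumption, and the real helper copies into a scratch file and checks its size instead of discarding the read:

    wait_for_nbd() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read as a smoke test, mirroring the dd calls above
        dd if=/dev/"$name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    wait_for_nbd nbd12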
00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.022 1+0 records in 00:08:15.022 1+0 records out 00:08:15.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710369 s, 5.8 MB/s 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:15.022 14:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:15.280 /dev/nbd13 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.280 1+0 records in 00:08:15.280 1+0 records out 00:08:15.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109637 s, 3.7 MB/s 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:15.280 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:15.280 /dev/nbd14 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.538 1+0 records in 00:08:15.538 1+0 records out 00:08:15.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747525 s, 5.5 MB/s 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.538 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.539 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.539 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd0", 00:08:15.539 "bdev_name": "Nvme0n1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd1", 00:08:15.539 "bdev_name": "Nvme1n1p1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd10", 00:08:15.539 "bdev_name": "Nvme1n1p2" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd11", 00:08:15.539 "bdev_name": "Nvme2n1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd12", 00:08:15.539 "bdev_name": "Nvme2n2" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd13", 00:08:15.539 "bdev_name": "Nvme2n3" 
00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd14", 00:08:15.539 "bdev_name": "Nvme3n1" 00:08:15.539 } 00:08:15.539 ]' 00:08:15.539 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd0", 00:08:15.539 "bdev_name": "Nvme0n1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd1", 00:08:15.539 "bdev_name": "Nvme1n1p1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd10", 00:08:15.539 "bdev_name": "Nvme1n1p2" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd11", 00:08:15.539 "bdev_name": "Nvme2n1" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd12", 00:08:15.539 "bdev_name": "Nvme2n2" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd13", 00:08:15.539 "bdev_name": "Nvme2n3" 00:08:15.539 }, 00:08:15.539 { 00:08:15.539 "nbd_device": "/dev/nbd14", 00:08:15.539 "bdev_name": "Nvme3n1" 00:08:15.539 } 00:08:15.539 ]' 00:08:15.539 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:15.797 /dev/nbd1 00:08:15.797 /dev/nbd10 00:08:15.797 /dev/nbd11 00:08:15.797 /dev/nbd12 00:08:15.797 /dev/nbd13 00:08:15.797 /dev/nbd14' 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:15.797 /dev/nbd1 00:08:15.797 /dev/nbd10 00:08:15.797 /dev/nbd11 00:08:15.797 /dev/nbd12 00:08:15.797 /dev/nbd13 00:08:15.797 /dev/nbd14' 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:15.797 256+0 records in 00:08:15.797 256+0 records out 00:08:15.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516684 s, 203 MB/s 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:15.797 256+0 records in 00:08:15.797 256+0 records out 00:08:15.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.113101 s, 9.3 MB/s 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.797 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:16.055 256+0 records in 00:08:16.055 256+0 records out 00:08:16.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119632 s, 8.8 MB/s 00:08:16.055 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.055 14:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:16.055 256+0 records in 00:08:16.055 256+0 records out 00:08:16.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0934015 s, 11.2 MB/s 00:08:16.055 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.055 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:16.055 256+0 records in 00:08:16.055 256+0 records out 00:08:16.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103339 s, 10.1 MB/s 00:08:16.055 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.055 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:16.313 256+0 records in 00:08:16.313 256+0 records out 00:08:16.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0797322 s, 13.2 MB/s 00:08:16.313 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.313 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:16.313 256+0 records in 00:08:16.313 256+0 records out 00:08:16.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146013 s, 7.2 MB/s 00:08:16.313 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.313 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:16.572 256+0 records in 00:08:16.572 256+0 records out 00:08:16.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0943948 s, 11.1 MB/s 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.572 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.830 14:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.088 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.089 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.089 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.345 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.602 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.859 14:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.117 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:18.374 malloc_lvol_verify 00:08:18.374 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:18.632 311603f8-3863-4603-abd0-e61ff079f6bd 00:08:18.632 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:18.890 46caf258-0072-4a24-b5ed-83d4e71eee71 00:08:18.890 14:40:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:19.148 /dev/nbd0 00:08:19.148 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:19.149 mke2fs 1.47.0 (5-Feb-2023) 00:08:19.149 Discarding device blocks: 0/4096 done 00:08:19.149 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:19.149 00:08:19.149 Allocating group tables: 0/1 done 00:08:19.149 Writing inode tables: 0/1 done 00:08:19.149 Creating journal (1024 blocks): done 00:08:19.149 Writing superblocks and filesystem accounting information: 0/1 done 00:08:19.149 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:19.149 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.406 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62833 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62833 ']' 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62833 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62833 00:08:19.407 killing process with pid 62833 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62833' 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62833 00:08:19.407 14:40:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62833 00:08:20.340 14:40:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:20.340 00:08:20.340 real 0m10.983s 00:08:20.340 user 0m15.518s 00:08:20.340 sys 0m3.554s 00:08:20.340 14:40:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.340 14:40:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:20.340 ************************************ 00:08:20.340 END TEST bdev_nbd 00:08:20.340 ************************************ 00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:20.340 skipping fio tests on NVMe due to multi-ns failures. 00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:20.340 14:40:58 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:20.340 14:40:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:20.340 14:40:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.340 14:40:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.340 ************************************ 00:08:20.340 START TEST bdev_verify 00:08:20.340 ************************************ 00:08:20.340 14:40:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:20.340 [2024-12-09 14:40:58.275428] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:20.340 [2024-12-09 14:40:58.275543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63243 ] 00:08:20.340 [2024-12-09 14:40:58.436436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.599 [2024-12-09 14:40:58.536020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.599 [2024-12-09 14:40:58.536181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.164 Running I/O for 5 seconds... 
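The five-second pass above, like every verify stage that follows, is driven by the same bdevperf binary; here is the invocation from the trace, reflowed with per-option readings (the readings come from bdevperf's usage text and are assumptions about intent, not something this log states):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \   # bdev config to load
    -q 128 \        # queue depth per job
    -o 4096 \       # I/O size in bytes (the big-I/O stage later uses 65536)
    -w verify \     # write, read back, and compare
    -t 5 \          # run time in seconds
    -C \            # every core submits I/O to every bdev, hence the 0x1/0x2 job pairs in the results
    -m 0x3          # core mask: reactors on cores 0 and 1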
00:08:23.470 21824.00 IOPS, 85.25 MiB/s [2024-12-09T14:41:02.525Z] 21536.00 IOPS, 84.12 MiB/s [2024-12-09T14:41:03.457Z] 22101.33 IOPS, 86.33 MiB/s [2024-12-09T14:41:04.390Z] 22400.00 IOPS, 87.50 MiB/s [2024-12-09T14:41:04.390Z] 22092.80 IOPS, 86.30 MiB/s 00:08:26.268 Latency(us) 00:08:26.268 [2024-12-09T14:41:04.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0xbd0bd 00:08:26.268 Nvme0n1 : 5.07 1578.52 6.17 0.00 0.00 80717.65 13208.02 83482.78 00:08:26.268 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:26.268 Nvme0n1 : 5.07 1527.46 5.97 0.00 0.00 83360.93 11443.59 87112.47 00:08:26.268 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x4ff80 00:08:26.268 Nvme1n1p1 : 5.07 1578.01 6.16 0.00 0.00 80620.23 12199.78 73400.32 00:08:26.268 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:26.268 Nvme1n1p1 : 5.09 1534.52 5.99 0.00 0.00 82928.60 13208.02 77836.60 00:08:26.268 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x4ff7f 00:08:26.268 Nvme1n1p2 : 5.07 1576.90 6.16 0.00 0.00 80572.24 13510.50 70577.23 00:08:26.268 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:26.268 Nvme1n1p2 : 5.09 1532.85 5.99 0.00 0.00 82756.66 17543.48 76626.71 00:08:26.268 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x80000 00:08:26.268 Nvme2n1 : 5.07 1576.44 6.16 0.00 0.00 80441.65 13611.32 64931.05 00:08:26.268 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x80000 length 0x80000 00:08:26.268 Nvme2n1 : 5.10 1532.45 5.99 0.00 0.00 82630.94 17543.48 72997.02 00:08:26.268 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x80000 00:08:26.268 Nvme2n2 : 5.09 1585.22 6.19 0.00 0.00 80039.07 9124.63 62914.56 00:08:26.268 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x80000 length 0x80000 00:08:26.268 Nvme2n2 : 5.10 1532.06 5.98 0.00 0.00 82481.31 16837.71 74206.92 00:08:26.268 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x80000 00:08:26.268 Nvme2n3 : 5.09 1584.76 6.19 0.00 0.00 79872.34 9729.58 65737.65 00:08:26.268 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x80000 length 0x80000 00:08:26.268 Nvme2n3 : 5.10 1531.65 5.98 0.00 0.00 82330.41 13510.50 77030.01 00:08:26.268 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x0 length 0x20000 00:08:26.268 Nvme3n1 : 5.09 1583.61 6.19 0.00 0.00 79730.04 11443.59 68964.04 00:08:26.268 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:26.268 Verification LBA range: start 0x20000 length 0x20000 00:08:26.268 Nvme3n1 
: 5.10 1531.23 5.98 0.00 0.00 82253.69 10082.46 81869.59 00:08:26.268 [2024-12-09T14:41:04.390Z] =================================================================================================================== 00:08:26.268 [2024-12-09T14:41:04.390Z] Total : 21785.68 85.10 0.00 0.00 81462.75 9124.63 87112.47 00:08:27.640 00:08:27.640 real 0m7.343s 00:08:27.640 user 0m13.739s 00:08:27.640 sys 0m0.245s 00:08:27.640 14:41:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.640 14:41:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:27.640 ************************************ 00:08:27.640 END TEST bdev_verify 00:08:27.640 ************************************ 00:08:27.640 14:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.640 14:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:27.640 14:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.640 14:41:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.640 ************************************ 00:08:27.640 START TEST bdev_verify_big_io 00:08:27.640 ************************************ 00:08:27.640 14:41:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:27.640 [2024-12-09 14:41:05.677053] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:27.640 [2024-12-09 14:41:05.677167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63341 ] 00:08:27.898 [2024-12-09 14:41:05.835901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.898 [2024-12-09 14:41:05.953511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.898 [2024-12-09 14:41:05.953518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.829 Running I/O for 5 seconds... 
00:08:34.953 1554.00 IOPS, 97.12 MiB/s [2024-12-09T14:41:13.334Z] 2943.00 IOPS, 183.94 MiB/s [2024-12-09T14:41:13.591Z] 3646.67 IOPS, 227.92 MiB/s 00:08:35.469 Latency(us) 00:08:35.469 [2024-12-09T14:41:13.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.469 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0x0 length 0xbd0b 00:08:35.469 Nvme0n1 : 5.79 124.35 7.77 0.00 0.00 981548.43 15829.46 1193763.45 00:08:35.469 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:35.469 Nvme0n1 : 5.99 67.98 4.25 0.00 0.00 1746817.98 11695.66 2090699.22 00:08:35.469 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0x0 length 0x4ff8 00:08:35.469 Nvme1n1p1 : 5.79 128.63 8.04 0.00 0.00 921207.58 94371.84 1025991.29 00:08:35.469 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:35.469 Nvme1n1p1 : 6.18 78.51 4.91 0.00 0.00 1461471.93 33070.47 1729343.80 00:08:35.469 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0x0 length 0x4ff7 00:08:35.469 Nvme1n1p2 : 5.90 130.83 8.18 0.00 0.00 877221.17 116956.55 916294.10 00:08:35.469 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.469 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:35.470 Nvme1n1p2 : 6.21 82.95 5.18 0.00 0.00 1297259.03 40934.79 1619646.62 00:08:35.470 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x0 length 0x8000 00:08:35.470 Nvme2n1 : 5.90 133.66 8.35 0.00 0.00 841522.37 106470.79 877577.45 00:08:35.470 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x8000 length 0x8000 00:08:35.470 Nvme2n1 : 6.26 92.71 5.79 0.00 0.00 1102759.76 22483.89 1619646.62 00:08:35.470 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x0 length 0x8000 00:08:35.470 Nvme2n2 : 5.99 144.32 9.02 0.00 0.00 768305.51 32667.18 896935.78 00:08:35.470 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x8000 length 0x8000 00:08:35.470 Nvme2n2 : 6.40 124.33 7.77 0.00 0.00 790187.78 18854.20 1690627.15 00:08:35.470 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x0 length 0x8000 00:08:35.470 Nvme2n3 : 6.03 147.84 9.24 0.00 0.00 725593.27 44161.18 896935.78 00:08:35.470 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x8000 length 0x8000 00:08:35.470 Nvme2n3 : 6.66 188.72 11.80 0.00 0.00 499063.22 9880.81 2890843.37 00:08:35.470 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x0 length 0x2000 00:08:35.470 Nvme3n1 : 6.10 163.90 10.24 0.00 0.00 637933.44 4814.38 909841.33 00:08:35.470 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:35.470 Verification LBA range: start 0x2000 length 0x2000 00:08:35.470 Nvme3n1 : 6.88 313.69 19.61 0.00 0.00 289602.24 259.94 2903748.92 00:08:35.470 
[2024-12-09T14:41:13.592Z] =================================================================================================================== 00:08:35.470 [2024-12-09T14:41:13.592Z] Total : 1922.42 120.15 0.00 0.00 775577.17 259.94 2903748.92 00:08:36.839 00:08:36.839 real 0m9.271s 00:08:36.839 user 0m17.545s 00:08:36.839 sys 0m0.265s 00:08:36.839 14:41:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.839 ************************************ 00:08:36.839 END TEST bdev_verify_big_io 00:08:36.839 ************************************ 00:08:36.839 14:41:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:36.839 14:41:14 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.839 14:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:36.839 14:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.839 14:41:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.839 ************************************ 00:08:36.839 START TEST bdev_write_zeroes 00:08:36.839 ************************************ 00:08:36.839 14:41:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.096 [2024-12-09 14:41:14.994237] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:37.096 [2024-12-09 14:41:14.994356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63461 ] 00:08:37.096 [2024-12-09 14:41:15.158493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.353 [2024-12-09 14:41:15.268981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.918 Running I/O for 1 seconds... 
00:08:38.850 68992.00 IOPS, 269.50 MiB/s 00:08:38.850 Latency(us) 00:08:38.850 [2024-12-09T14:41:16.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.850 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme0n1 : 1.02 9827.41 38.39 0.00 0.00 12996.65 11594.83 23996.26 00:08:38.850 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme1n1p1 : 1.02 9815.31 38.34 0.00 0.00 12991.84 11292.36 23391.31 00:08:38.850 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme1n1p2 : 1.02 9803.34 38.29 0.00 0.00 12976.78 11292.36 22887.19 00:08:38.850 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme2n1 : 1.03 9792.29 38.25 0.00 0.00 12969.95 11141.12 22584.71 00:08:38.850 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme2n2 : 1.03 9781.28 38.21 0.00 0.00 12933.55 8166.79 22383.06 00:08:38.850 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme2n3 : 1.03 9770.26 38.17 0.00 0.00 12920.03 7108.14 22483.89 00:08:38.850 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:38.850 Nvme3n1 : 1.03 9759.26 38.12 0.00 0.00 12908.03 6326.74 23996.26 00:08:38.850 [2024-12-09T14:41:16.972Z] =================================================================================================================== 00:08:38.850 [2024-12-09T14:41:16.972Z] Total : 68549.16 267.77 0.00 0.00 12956.69 6326.74 23996.26 00:08:39.864 00:08:39.864 real 0m2.775s 00:08:39.864 user 0m2.459s 00:08:39.864 sys 0m0.203s 00:08:39.864 14:41:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.864 ************************************ 00:08:39.864 END TEST bdev_write_zeroes 00:08:39.864 ************************************ 00:08:39.864 14:41:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:39.864 14:41:17 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:39.864 14:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:39.864 14:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.864 14:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.864 ************************************ 00:08:39.864 START TEST bdev_json_nonenclosed 00:08:39.864 ************************************ 00:08:39.864 14:41:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:39.864 [2024-12-09 14:41:17.817714] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
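The bdev_json_nonenclosed stage starting here, like bdev_json_nonarray after it, is a negative test: it hands bdevperf a deliberately malformed config and passes only if the config is cleanly rejected. A sketch of that expectation, using the file name from the trace (the error strings quoted in the comments are the ones the log records below):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 '' \
    || echo "rejected as expected"
# expected: json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
# the nonarray variant fails the same way with: 'subsystems' should be an array.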
00:08:39.864 [2024-12-09 14:41:17.817850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:08:40.129 [2024-12-09 14:41:17.974199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.129 [2024-12-09 14:41:18.082914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.129 [2024-12-09 14:41:18.082996] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:40.129 [2024-12-09 14:41:18.083014] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:40.129 [2024-12-09 14:41:18.083024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.387 00:08:40.387 real 0m0.517s 00:08:40.387 user 0m0.308s 00:08:40.387 sys 0m0.104s 00:08:40.387 14:41:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.387 14:41:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:40.387 ************************************ 00:08:40.387 END TEST bdev_json_nonenclosed 00:08:40.387 ************************************ 00:08:40.387 14:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.387 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:40.387 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.387 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.387 ************************************ 00:08:40.387 START TEST bdev_json_nonarray 00:08:40.387 ************************************ 00:08:40.387 14:41:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.387 [2024-12-09 14:41:18.375506] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:40.387 [2024-12-09 14:41:18.375625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:08:40.646 [2024-12-09 14:41:18.538816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.646 [2024-12-09 14:41:18.646881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.646 [2024-12-09 14:41:18.646981] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:40.646 [2024-12-09 14:41:18.647000] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:40.646 [2024-12-09 14:41:18.647010] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.903 00:08:40.903 real 0m0.521s 00:08:40.903 user 0m0.303s 00:08:40.903 sys 0m0.112s 00:08:40.903 14:41:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.903 14:41:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:40.903 ************************************ 00:08:40.903 END TEST bdev_json_nonarray 00:08:40.903 ************************************ 00:08:40.903 14:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:40.903 14:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:40.904 14:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:40.904 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.904 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.904 14:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.904 ************************************ 00:08:40.904 START TEST bdev_gpt_uuid 00:08:40.904 ************************************ 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63565 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63565 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63565 ']' 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.904 14:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:40.904 [2024-12-09 14:41:18.948124] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:08:40.904 [2024-12-09 14:41:18.948251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:08:41.161 [2024-12-09 14:41:19.111214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.161 [2024-12-09 14:41:19.221897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.095 14:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.095 14:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:42.095 14:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.095 14:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.095 14:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.095 Some configs were skipped because the RPC state that can call them passed over. 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.095 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:42.353 { 00:08:42.353 "name": "Nvme1n1p1", 00:08:42.353 "aliases": [ 00:08:42.353 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:42.353 ], 00:08:42.353 "product_name": "GPT Disk", 00:08:42.353 "block_size": 4096, 00:08:42.353 "num_blocks": 655104, 00:08:42.353 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:42.353 "assigned_rate_limits": { 00:08:42.353 "rw_ios_per_sec": 0, 00:08:42.353 "rw_mbytes_per_sec": 0, 00:08:42.353 "r_mbytes_per_sec": 0, 00:08:42.353 "w_mbytes_per_sec": 0 00:08:42.353 }, 00:08:42.353 "claimed": false, 00:08:42.353 "zoned": false, 00:08:42.353 "supported_io_types": { 00:08:42.353 "read": true, 00:08:42.353 "write": true, 00:08:42.353 "unmap": true, 00:08:42.353 "flush": true, 00:08:42.353 "reset": true, 00:08:42.353 "nvme_admin": false, 00:08:42.353 "nvme_io": false, 00:08:42.353 "nvme_io_md": false, 00:08:42.353 "write_zeroes": true, 00:08:42.353 "zcopy": false, 00:08:42.353 "get_zone_info": false, 00:08:42.353 "zone_management": false, 00:08:42.353 "zone_append": false, 00:08:42.353 "compare": true, 00:08:42.353 "compare_and_write": false, 00:08:42.353 "abort": true, 00:08:42.353 "seek_hole": false, 00:08:42.353 "seek_data": false, 00:08:42.353 "copy": true, 00:08:42.353 "nvme_iov_md": false 00:08:42.353 }, 00:08:42.353 "driver_specific": { 
00:08:42.353 "gpt": { 00:08:42.353 "base_bdev": "Nvme1n1", 00:08:42.353 "offset_blocks": 256, 00:08:42.353 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:42.353 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:42.353 "partition_name": "SPDK_TEST_first" 00:08:42.353 } 00:08:42.353 } 00:08:42.353 } 00:08:42.353 ]' 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:42.353 { 00:08:42.353 "name": "Nvme1n1p2", 00:08:42.353 "aliases": [ 00:08:42.353 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:42.353 ], 00:08:42.353 "product_name": "GPT Disk", 00:08:42.353 "block_size": 4096, 00:08:42.353 "num_blocks": 655103, 00:08:42.353 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:42.353 "assigned_rate_limits": { 00:08:42.353 "rw_ios_per_sec": 0, 00:08:42.353 "rw_mbytes_per_sec": 0, 00:08:42.353 "r_mbytes_per_sec": 0, 00:08:42.353 "w_mbytes_per_sec": 0 00:08:42.353 }, 00:08:42.353 "claimed": false, 00:08:42.353 "zoned": false, 00:08:42.353 "supported_io_types": { 00:08:42.353 "read": true, 00:08:42.353 "write": true, 00:08:42.353 "unmap": true, 00:08:42.353 "flush": true, 00:08:42.353 "reset": true, 00:08:42.353 "nvme_admin": false, 00:08:42.353 "nvme_io": false, 00:08:42.353 "nvme_io_md": false, 00:08:42.353 "write_zeroes": true, 00:08:42.353 "zcopy": false, 00:08:42.353 "get_zone_info": false, 00:08:42.353 "zone_management": false, 00:08:42.353 "zone_append": false, 00:08:42.353 "compare": true, 00:08:42.353 "compare_and_write": false, 00:08:42.353 "abort": true, 00:08:42.353 "seek_hole": false, 00:08:42.353 "seek_data": false, 00:08:42.353 "copy": true, 00:08:42.353 "nvme_iov_md": false 00:08:42.353 }, 00:08:42.353 "driver_specific": { 00:08:42.353 "gpt": { 00:08:42.353 "base_bdev": "Nvme1n1", 00:08:42.353 "offset_blocks": 655360, 00:08:42.353 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:42.353 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:42.353 "partition_name": "SPDK_TEST_second" 00:08:42.353 } 00:08:42.353 } 00:08:42.353 } 00:08:42.353 ]' 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:42.353 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63565 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63565 ']' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63565 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63565 00:08:42.354 killing process with pid 63565 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63565' 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63565 00:08:42.354 14:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63565 00:08:44.253 00:08:44.253 real 0m3.190s 00:08:44.253 user 0m3.278s 00:08:44.253 sys 0m0.411s 00:08:44.253 14:41:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.253 14:41:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:44.253 ************************************ 00:08:44.253 END TEST bdev_gpt_uuid 00:08:44.253 ************************************ 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:44.253 14:41:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:44.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.510 Waiting for block devices as requested 00:08:44.510 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.768 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:44.768 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.768 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.118 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:50.118 14:41:27 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:50.118 14:41:27 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:50.118 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.118 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.119 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:50.119 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:50.119 14:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:50.119 00:08:50.119 real 0m56.676s 00:08:50.119 user 1m12.892s 00:08:50.119 sys 0m7.832s 00:08:50.119 14:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.119 ************************************ 00:08:50.119 14:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.119 END TEST blockdev_nvme_gpt 00:08:50.119 ************************************ 00:08:50.119 14:41:28 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:50.119 14:41:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.119 14:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.119 14:41:28 -- common/autotest_common.sh@10 -- # set +x 00:08:50.119 ************************************ 00:08:50.119 START TEST nvme 00:08:50.119 ************************************ 00:08:50.119 14:41:28 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:50.119 * Looking for test storage... 00:08:50.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.119 14:41:28 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.119 14:41:28 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.119 14:41:28 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.376 14:41:28 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.376 14:41:28 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.376 14:41:28 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.376 14:41:28 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.376 14:41:28 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.376 14:41:28 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:50.376 14:41:28 nvme -- scripts/common.sh@345 -- # : 1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.376 14:41:28 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.376 14:41:28 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@353 -- # local d=1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.376 14:41:28 nvme -- scripts/common.sh@355 -- # echo 1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.376 14:41:28 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@353 -- # local d=2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.376 14:41:28 nvme -- scripts/common.sh@355 -- # echo 2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.376 14:41:28 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.376 14:41:28 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.376 14:41:28 nvme -- scripts/common.sh@368 -- # return 0 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.376 --rc genhtml_branch_coverage=1 00:08:50.376 --rc genhtml_function_coverage=1 00:08:50.376 --rc genhtml_legend=1 00:08:50.376 --rc geninfo_all_blocks=1 00:08:50.376 --rc geninfo_unexecuted_blocks=1 00:08:50.376 00:08:50.376 ' 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.376 --rc genhtml_branch_coverage=1 00:08:50.376 --rc genhtml_function_coverage=1 00:08:50.376 --rc genhtml_legend=1 00:08:50.376 --rc geninfo_all_blocks=1 00:08:50.376 --rc geninfo_unexecuted_blocks=1 00:08:50.376 00:08:50.376 ' 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.376 --rc genhtml_branch_coverage=1 00:08:50.376 --rc genhtml_function_coverage=1 00:08:50.376 --rc genhtml_legend=1 00:08:50.376 --rc geninfo_all_blocks=1 00:08:50.376 --rc geninfo_unexecuted_blocks=1 00:08:50.376 00:08:50.376 ' 00:08:50.376 14:41:28 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.376 --rc genhtml_branch_coverage=1 00:08:50.376 --rc genhtml_function_coverage=1 00:08:50.376 --rc genhtml_legend=1 00:08:50.376 --rc geninfo_all_blocks=1 00:08:50.376 --rc geninfo_unexecuted_blocks=1 00:08:50.376 00:08:50.376 ' 00:08:50.376 14:41:28 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:50.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:51.209 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:51.209 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:51.209 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:51.209 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:51.209 14:41:29 nvme -- nvme/nvme.sh@79 -- # uname 00:08:51.209 14:41:29 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:51.209 14:41:29 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:51.209 14:41:29 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:51.209 Waiting for stub to ready 
for secondary processes... 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1075 -- # stubpid=64200 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64200 ]] 00:08:51.209 14:41:29 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:51.469 [2024-12-09 14:41:29.348907] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:08:51.469 [2024-12-09 14:41:29.349025] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:52.038 [2024-12-09 14:41:30.121590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.297 [2024-12-09 14:41:30.218340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.297 [2024-12-09 14:41:30.218570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.297 [2024-12-09 14:41:30.218609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.297 [2024-12-09 14:41:30.232271] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:52.297 [2024-12-09 14:41:30.232406] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:52.297 [2024-12-09 14:41:30.240736] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:52.297 [2024-12-09 14:41:30.241051] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:52.297 [2024-12-09 14:41:30.245494] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:52.297 [2024-12-09 14:41:30.245894] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:52.297 [2024-12-09 14:41:30.246022] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:52.297 [2024-12-09 14:41:30.249577] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:52.297 [2024-12-09 14:41:30.249693] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:52.297 [2024-12-09 14:41:30.249737] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:52.297 [2024-12-09 14:41:30.251300] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:52.297 [2024-12-09 14:41:30.251417] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:52.297 [2024-12-09 14:41:30.251473] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:52.297 [2024-12-09 14:41:30.251506] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:52.297 [2024-12-09 14:41:30.251549] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:52.297 done. 00:08:52.297 14:41:30 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:52.297 14:41:30 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:52.297 14:41:30 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.297 14:41:30 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:52.297 14:41:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.297 14:41:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.297 ************************************ 00:08:52.297 START TEST nvme_reset 00:08:52.297 ************************************ 00:08:52.298 14:41:30 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.557 Initializing NVMe Controllers 00:08:52.557 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:52.557 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:52.557 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:52.557 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:52.557 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:52.557 00:08:52.557 real 0m0.207s 00:08:52.557 user 0m0.065s 00:08:52.557 sys 0m0.102s 00:08:52.557 14:41:30 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.557 14:41:30 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:52.557 ************************************ 00:08:52.557 END TEST nvme_reset 00:08:52.557 ************************************ 00:08:52.557 14:41:30 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:52.557 14:41:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.557 14:41:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.557 14:41:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.557 ************************************ 00:08:52.557 START TEST nvme_identify 00:08:52.557 ************************************ 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:52.557 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:52.557 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:52.557 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.557 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.557 14:41:30 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.557 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:52.817 
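The bdf list handed to spdk_nvme_identify above is produced by the get_nvme_bdfs helper traced from autotest_common.sh. A minimal standalone sketch of that step, assuming the same /home/vagrant/spdk_repo/spdk layout as this run:

  # Sketch: enumerate NVMe PCI addresses (bdfs) the way get_nvme_bdfs does above.
  # gen_nvme.sh renders an SPDK bdev config as JSON; jq extracts each controller's traddr.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Bail out when no controllers are visible, mirroring the (( 4 == 0 )) guard in the trace.
  (( ${#bdfs[@]} == 0 )) && { echo 'No NVMe devices found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

Each address is then passed to spdk_nvme_identify, whose per-controller dump follows.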
===================================================== 00:08:52.817 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.817 ===================================================== 00:08:52.817 Controller Capabilities/Features 00:08:52.817 ================================ 00:08:52.817 Vendor ID: 1b36 00:08:52.817 Subsystem Vendor ID: 1af4 00:08:52.817 Serial Number: 12340 00:08:52.817 Model Number: QEMU NVMe Ctrl 00:08:52.817 Firmware Version: 8.0.0 00:08:52.817 Recommended Arb Burst: 6 00:08:52.817 IEEE OUI Identifier: 00 54 52 00:08:52.817 Multi-path I/O 00:08:52.817 May have multiple subsystem ports: No 00:08:52.817 May have multiple controllers: No 00:08:52.817 Associated with SR-IOV VF: No 00:08:52.817 Max Data Transfer Size: 524288 00:08:52.817 Max Number of Namespaces: 256 00:08:52.817 Max Number of I/O Queues: 64 00:08:52.817 NVMe Specification Version (VS): 1.4 00:08:52.817 NVMe Specification Version (Identify): 1.4 00:08:52.817 Maximum Queue Entries: 2048 00:08:52.817 Contiguous Queues Required: Yes 00:08:52.817 Arbitration Mechanisms Supported 00:08:52.817 Weighted Round Robin: Not Supported 00:08:52.817 Vendor Specific: Not Supported 00:08:52.817 Reset Timeout: 7500 ms 00:08:52.817 Doorbell Stride: 4 bytes 00:08:52.817 NVM Subsystem Reset: Not Supported 00:08:52.817 Command Sets Supported 00:08:52.817 NVM Command Set: Supported 00:08:52.817 Boot Partition: Not Supported 00:08:52.817 Memory Page Size Minimum: 4096 bytes 00:08:52.817 Memory Page Size Maximum: 65536 bytes 00:08:52.817 Persistent Memory Region: Not Supported 00:08:52.817 Optional Asynchronous Events Supported 00:08:52.817 Namespace Attribute Notices: Supported 00:08:52.817 Firmware Activation Notices: Not Supported 00:08:52.817 ANA Change Notices: Not Supported 00:08:52.817 PLE Aggregate Log Change Notices: Not Supported 00:08:52.817 LBA Status Info Alert Notices: Not Supported 00:08:52.817 EGE Aggregate Log Change Notices: Not Supported 00:08:52.817 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.817 Zone Descriptor Change Notices: Not Supported 00:08:52.817 Discovery Log Change Notices: Not Supported 00:08:52.817 Controller Attributes 00:08:52.817 128-bit Host Identifier: Not Supported 00:08:52.817 Non-Operational Permissive Mode: Not Supported 00:08:52.817 NVM Sets: Not Supported 00:08:52.817 Read Recovery Levels: Not Supported 00:08:52.817 Endurance Groups: Not Supported 00:08:52.817 Predictable Latency Mode: Not Supported 00:08:52.817 Traffic Based Keep Alive: Not Supported 00:08:52.817 Namespace Granularity: Not Supported 00:08:52.817 SQ Associations: Not Supported 00:08:52.817 UUID List: Not Supported 00:08:52.817 Multi-Domain Subsystem: Not Supported 00:08:52.817 Fixed Capacity Management: Not Supported 00:08:52.817 Variable Capacity Management: Not Supported 00:08:52.817 Delete Endurance Group: Not Supported 00:08:52.817 Delete NVM Set: Not Supported 00:08:52.817 Extended LBA Formats Supported: Supported 00:08:52.817 Flexible Data Placement Supported: Not Supported 00:08:52.817 00:08:52.817 Controller Memory Buffer Support 00:08:52.817 ================================ 00:08:52.817 Supported: No 00:08:52.817 00:08:52.817 Persistent Memory Region Support 00:08:52.817 ================================ 00:08:52.817 Supported: No 00:08:52.817 00:08:52.817 Admin Command Set Attributes 00:08:52.817 ============================ 00:08:52.817 Security Send/Receive: Not Supported 00:08:52.817 Format NVM: Supported 00:08:52.817 Firmware Activate/Download: Not Supported 00:08:52.817 Namespace Management:
Supported 00:08:52.817 Device Self-Test: Not Supported 00:08:52.817 Directives: Supported 00:08:52.817 NVMe-MI: Not Supported 00:08:52.817 Virtualization Management: Not Supported 00:08:52.817 Doorbell Buffer Config: Supported 00:08:52.817 Get LBA Status Capability: Not Supported 00:08:52.817 Command & Feature Lockdown Capability: Not Supported 00:08:52.817 Abort Command Limit: 4 00:08:52.817 Async Event Request Limit: 4 00:08:52.817 Number of Firmware Slots: N/A 00:08:52.817 Firmware Slot 1 Read-Only: N/A 00:08:52.817 Firmware Activation Without Reset: N/A 00:08:52.817 Multiple Update Detection Support: N/A 00:08:52.817 Firmware Update Granularity: No Information Provided 00:08:52.817 Per-Namespace SMART Log: Yes 00:08:52.817 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.817 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:52.817 Command Effects Log Page: Supported 00:08:52.817 Get Log Page Extended Data: Supported 00:08:52.817 Telemetry Log Pages: Not Supported 00:08:52.817 Persistent Event Log Pages: Not Supported 00:08:52.817 Supported Log Pages Log Page: May Support 00:08:52.817 Commands Supported & Effects Log Page: Not Supported 00:08:52.817 Feature Identifiers & Effects Log Page: May Support 00:08:52.817 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.817 Data Area 4 for Telemetry Log: Not Supported 00:08:52.817 Error Log Page Entries Supported: 1 00:08:52.817 Keep Alive: Not Supported 00:08:52.817 00:08:52.817 NVM Command Set Attributes 00:08:52.817 ========================== 00:08:52.817 Submission Queue Entry Size 00:08:52.817 Max: 64 00:08:52.817 Min: 64 00:08:52.817 Completion Queue Entry Size 00:08:52.817 Max: 16 00:08:52.817 Min: 16 00:08:52.817 Number of Namespaces: 256 00:08:52.817 Compare Command: Supported 00:08:52.817 Write Uncorrectable Command: Not Supported 00:08:52.817 Dataset Management Command: Supported 00:08:52.817 Write Zeroes Command: Supported 00:08:52.817 Set Features Save Field: Supported 00:08:52.817 Reservations: Not Supported 00:08:52.818 Timestamp: Supported 00:08:52.818 Copy: Supported 00:08:52.818 Volatile Write Cache: Present 00:08:52.818 Atomic Write Unit (Normal): 1 00:08:52.818 Atomic Write Unit (PFail): 1 00:08:52.818 Atomic Compare & Write Unit: 1 00:08:52.818 Fused Compare & Write: Not Supported 00:08:52.818 Scatter-Gather List 00:08:52.818 SGL Command Set: Supported 00:08:52.818 SGL Keyed: Not Supported 00:08:52.818 SGL Bit Bucket Descriptor: Not Supported 00:08:52.818 SGL Metadata Pointer: Not Supported 00:08:52.818 Oversized SGL: Not Supported 00:08:52.818 SGL Metadata Address: Not Supported 00:08:52.818 SGL Offset: Not Supported 00:08:52.818 Transport SGL Data Block: Not Supported 00:08:52.818 Replay Protected Memory Block: Not Supported 00:08:52.818 00:08:52.818 Firmware Slot Information 00:08:52.818 ========================= 00:08:52.818 Active slot: 1 00:08:52.818 Slot 1 Firmware Revision: 1.0 00:08:52.818 00:08:52.818 00:08:52.818 Commands Supported and Effects 00:08:52.818 ============================== 00:08:52.818 Admin Commands 00:08:52.818 -------------- 00:08:52.818 Delete I/O Submission Queue (00h): Supported 00:08:52.818 Create I/O Submission Queue (01h): Supported 00:08:52.818 Get Log Page (02h): Supported 00:08:52.818 Delete I/O Completion Queue (04h): Supported 00:08:52.818 Create I/O Completion Queue (05h): Supported 00:08:52.818 Identify (06h): Supported 00:08:52.818 Abort (08h): Supported 00:08:52.818 Set Features (09h): Supported 00:08:52.818 Get Features (0Ah): Supported 00:08:52.818 Asynchronous
Event Request (0Ch): Supported 00:08:52.818 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.818 Directive Send (19h): Supported 00:08:52.818 Directive Receive (1Ah): Supported 00:08:52.818 Virtualization Management (1Ch): Supported 00:08:52.818 Doorbell Buffer Config (7Ch): Supported 00:08:52.818 Format NVM (80h): Supported LBA-Change 00:08:52.818 I/O Commands 00:08:52.818 ------------ 00:08:52.818 Flush (00h): Supported LBA-Change 00:08:52.818 Write (01h): Supported LBA-Change 00:08:52.818 Read (02h): Supported 00:08:52.818 Compare (05h): Supported 00:08:52.818 Write Zeroes (08h): Supported LBA-Change 00:08:52.818 Dataset Management (09h): Supported LBA-Change 00:08:52.818 Unknown (0Ch): Supported 00:08:52.818 Unknown (12h): Supported 00:08:52.818 Copy (19h): Supported LBA-Change 00:08:52.818 Unknown (1Dh): Supported LBA-Change 00:08:52.818 00:08:52.818 Error Log 00:08:52.818 ========= 00:08:52.818 00:08:52.818 Arbitration 00:08:52.818 =========== 00:08:52.818 Arbitration Burst: no limit 00:08:52.818 00:08:52.818 Power Management 00:08:52.818 ================ 00:08:52.818 Number of Power States: 1 00:08:52.818 Current Power State: Power State #0 00:08:52.818 Power State #0: 00:08:52.818 Max Power: 25.00 W 00:08:52.818 Non-Operational State: Operational 00:08:52.818 Entry Latency: 16 microseconds 00:08:52.818 Exit Latency: 4 microseconds 00:08:52.818 Relative Read Throughput: 0 00:08:52.818 Relative Read Latency: 0 00:08:52.818 Relative Write Throughput: 0 00:08:52.818 Relative Write Latency: 0 00:08:52.818 Idle Power: Not Reported 00:08:52.818 Active Power: Not Reported 00:08:52.818 Non-Operational Permissive Mode: Not Supported 00:08:52.818 00:08:52.818 Health Information 00:08:52.818 ================== 00:08:52.818 Critical Warnings: 00:08:52.818 Available Spare Space: OK 00:08:52.818 Temperature: OK 00:08:52.818 Device Reliability: OK 00:08:52.818 Read Only: No 00:08:52.818 Volatile Memory Backup: OK 00:08:52.818 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.818 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.818 Available Spare: 0% 00:08:52.818 Available Spare Threshold: 0% 00:08:52.818 Life Percentage Used: 0% 00:08:52.818 Data Units Read: 704 00:08:52.818 Data Units Written: 632 00:08:52.818 Host Read Commands: 40922 00:08:52.818 Host Write Commands: 40708 00:08:52.818 Controller Busy Time: 0 minutes 00:08:52.818 Power Cycles: 0 00:08:52.818 Power On Hours: 0 hours 00:08:52.818 Unsafe Shutdowns: 0 00:08:52.818 Unrecoverable Media Errors: 0 00:08:52.818 Lifetime Error Log Entries: 0 00:08:52.818 Warning Temperature Time: 0 minutes 00:08:52.818 Critical Temperature Time: 0 minutes 00:08:52.818 00:08:52.818 Number of Queues 00:08:52.818 ================ 00:08:52.818 Number of I/O Submission Queues: 64 00:08:52.818 Number of I/O Completion Queues: 64 00:08:52.818 00:08:52.818 ZNS Specific Controller Data 00:08:52.818 ============================ 00:08:52.818 Zone Append Size Limit: 0 00:08:52.818 00:08:52.818 00:08:52.818 Active Namespaces 00:08:52.818 ================= 00:08:52.818 Namespace ID:1 00:08:52.818 Error Recovery Timeout: Unlimited 00:08:52.818 Command Set Identifier: NVM (00h) 00:08:52.818 Deallocate: Supported 00:08:52.818 Deallocated/Unwritten Error: Supported 00:08:52.818 Deallocated Read Value: All 0x00 00:08:52.818 Deallocate in Write Zeroes: Not Supported 00:08:52.818 Deallocated Guard Field: 0xFFFF 00:08:52.818 Flush: Supported 00:08:52.818 Reservation: Not Supported 00:08:52.818 Metadata Transferred as: Separate Metadata Buffer 
00:08:52.818 Namespace Sharing Capabilities: Private 00:08:52.818 Size (in LBAs): 1548666 (5GiB) 00:08:52.818 Capacity (in LBAs): 1548666 (5GiB) 00:08:52.818 Utilization (in LBAs): 1548666 (5GiB) 00:08:52.818 Thin Provisioning: Not Supported 00:08:52.818 Per-NS Atomic Units: No 00:08:52.818 Maximum Single Source Range Length: 128 00:08:52.818 Maximum Copy Length: 128 00:08:52.818 Maximum Source Range Count: 128 00:08:52.818 NGUID/EUI64 Never Reused: No 00:08:52.818 Namespace Write Protected: No 00:08:52.818 Number of LBA Formats: 8 00:08:52.818 Current LBA Format: LBA Format #07 00:08:52.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.818 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.818 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.818 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.818 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.818 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.818 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.818 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.818 00:08:52.818 NVM Specific Namespace Data 00:08:52.818 =========================== 00:08:52.818 Logical Block Storage Tag Mask: 0 00:08:52.818 Protection Information Capabilities: 00:08:52.818 16b Guard Protection Information Storage Tag Support: No 00:08:52.818 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.818 Storage Tag Check Read Support: No 00:08:52.818 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.818 ===================================================== 00:08:52.818 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.818 ===================================================== 00:08:52.818 Controller Capabilities/Features 00:08:52.818 ================================ 00:08:52.818 Vendor ID: 1b36 00:08:52.818 Subsystem Vendor ID: 1af4 00:08:52.818 Serial Number: 12341 00:08:52.818 Model Number: QEMU NVMe Ctrl 00:08:52.818 Firmware Version: 8.0.0 00:08:52.818 Recommended Arb Burst: 6 00:08:52.818 IEEE OUI Identifier: 00 54 52 00:08:52.818 Multi-path I/O 00:08:52.818 May have multiple subsystem ports: No 00:08:52.818 May have multiple controllers: No 00:08:52.818 Associated with SR-IOV VF: No 00:08:52.818 Max Data Transfer Size: 524288 00:08:52.818 Max Number of Namespaces: 256 00:08:52.818 Max Number of I/O Queues: 64 00:08:52.818 NVMe Specification Version (VS): 1.4 00:08:52.818 NVMe Specification Version (Identify): 1.4 00:08:52.818 Maximum Queue Entries: 2048 00:08:52.818 Contiguous Queues Required: Yes 00:08:52.818 Arbitration Mechanisms Supported 00:08:52.818 Weighted Round Robin: Not Supported 00:08:52.818 Vendor Specific: Not Supported 00:08:52.818 Reset Timeout: 7500 ms 00:08:52.818 Doorbell Stride: 
4 bytes 00:08:52.818 NVM Subsystem Reset: Not Supported 00:08:52.818 Command Sets Supported 00:08:52.818 NVM Command Set: Supported 00:08:52.818 Boot Partition: Not Supported 00:08:52.818 Memory Page Size Minimum: 4096 bytes 00:08:52.818 Memory Page Size Maximum: 65536 bytes 00:08:52.818 Persistent Memory Region: Not Supported 00:08:52.818 Optional Asynchronous Events Supported 00:08:52.818 Namespace Attribute Notices: Supported 00:08:52.818 Firmware Activation Notices: Not Supported 00:08:52.818 ANA Change Notices: Not Supported 00:08:52.818 PLE Aggregate Log Change Notices: Not Supported 00:08:52.818 LBA Status Info Alert Notices: Not Supported 00:08:52.819 EGE Aggregate Log Change Notices: Not Supported 00:08:52.819 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.819 Zone Descriptor Change Notices: Not Supported 00:08:52.819 Discovery Log Change Notices: Not Supported 00:08:52.819 Controller Attributes 00:08:52.819 128-bit Host Identifier: Not Supported 00:08:52.819 Non-Operational Permissive Mode: Not Supported 00:08:52.819 NVM Sets: Not Supported 00:08:52.819 Read Recovery Levels: Not Supported 00:08:52.819 Endurance Groups: Not Supported 00:08:52.819 Predictable Latency Mode: Not Supported 00:08:52.819 Traffic Based Keep Alive: Not Supported 00:08:52.819 Namespace Granularity: Not Supported 00:08:52.819 SQ Associations: Not Supported 00:08:52.819 UUID List: Not Supported 00:08:52.819 Multi-Domain Subsystem: Not Supported 00:08:52.819 Fixed Capacity Management: Not Supported 00:08:52.819 Variable Capacity Management: Not Supported 00:08:52.819 Delete Endurance Group: Not Supported 00:08:52.819 Delete NVM Set: Not Supported 00:08:52.819 Extended LBA Formats Supported: Supported 00:08:52.819 Flexible Data Placement Supported: Not Supported 00:08:52.819 00:08:52.819 Controller Memory Buffer Support 00:08:52.819 ================================ 00:08:52.819 Supported: No 00:08:52.819 00:08:52.819 Persistent Memory Region Support 00:08:52.819 ================================ 00:08:52.819 Supported: No 00:08:52.819 00:08:52.819 Admin Command Set Attributes 00:08:52.819 ============================ 00:08:52.819 Security Send/Receive: Not Supported 00:08:52.819 Format NVM: Supported 00:08:52.819 Firmware Activate/Download: Not Supported 00:08:52.819 Namespace Management: Supported 00:08:52.819 Device Self-Test: Not Supported 00:08:52.819 Directives: Supported 00:08:52.819 NVMe-MI: Not Supported 00:08:52.819 Virtualization Management: Not Supported 00:08:52.819 Doorbell Buffer Config: Supported 00:08:52.819 Get LBA Status Capability: Not Supported 00:08:52.819 Command & Feature Lockdown Capability: Not Supported 00:08:52.819 Abort Command Limit: 4 00:08:52.819 Async Event Request Limit: 4 00:08:52.819 Number of Firmware Slots: N/A 00:08:52.819 Firmware Slot 1 Read-Only: N/A 00:08:52.819 Firmware Activation Without Reset: N/A 00:08:52.819 Multiple Update Detection Support: N/A 00:08:52.819 Firmware Update Granularity: No Information Provided 00:08:52.819 Per-Namespace SMART Log: Yes 00:08:52.819 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.819 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:52.819 Command Effects Log Page: Supported 00:08:52.819 Get Log Page Extended Data: Supported 00:08:52.819 Telemetry Log Pages: Not Supported 00:08:52.819 Persistent Event Log Pages: Not Supported 00:08:52.819 Supported Log Pages Log Page: May Support 00:08:52.819 Commands Supported & Effects Log Page: Not Supported 00:08:52.819 Feature Identifiers & Effects Log Page: May Support
00:08:52.819 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.819 Data Area 4 for Telemetry Log: Not Supported 00:08:52.819 Error Log Page Entries Supported: 1 00:08:52.819 Keep Alive: Not Supported 00:08:52.819 00:08:52.819 NVM Command Set Attributes 00:08:52.819 ========================== 00:08:52.819 Submission Queue Entry Size 00:08:52.819 Max: 64 00:08:52.819 Min: 64 00:08:52.819 Completion Queue Entry Size 00:08:52.819 Max: 16 00:08:52.819 Min: 16 00:08:52.819 Number of Namespaces: 256 00:08:52.819 Compare Command: Supported 00:08:52.819 Write Uncorrectable Command: Not Supported 00:08:52.819 Dataset Management Command: Supported 00:08:52.819 Write Zeroes Command: Supported 00:08:52.819 Set Features Save Field: Supported 00:08:52.819 Reservations: Not Supported 00:08:52.819 Timestamp: Supported 00:08:52.819 Copy: Supported 00:08:52.819 Volatile Write Cache: Present 00:08:52.819 Atomic Write Unit (Normal): 1 00:08:52.819 Atomic Write Unit (PFail): 1 00:08:52.819 Atomic Compare & Write Unit: 1 00:08:52.819 Fused Compare & Write: Not Supported 00:08:52.819 Scatter-Gather List 00:08:52.819 SGL Command Set: Supported 00:08:52.819 SGL Keyed: Not Supported 00:08:52.819 SGL Bit Bucket Descriptor: Not Supported 00:08:52.819 SGL Metadata Pointer: Not Supported 00:08:52.819 Oversized SGL: Not Supported 00:08:52.819 SGL Metadata Address: Not Supported 00:08:52.819 SGL Offset: Not Supported 00:08:52.819 Transport SGL Data Block: Not Supported 00:08:52.819 Replay Protected Memory Block: Not Supported 00:08:52.819 00:08:52.819 Firmware Slot Information 00:08:52.819 ========================= 00:08:52.819 Active slot: 1 00:08:52.819 Slot 1 Firmware Revision: 1.0 00:08:52.819 00:08:52.819 00:08:52.819 Commands Supported and Effects 00:08:52.819 ============================== 00:08:52.819 Admin Commands 00:08:52.819 -------------- 00:08:52.819 Delete I/O Submission Queue (00h): Supported 00:08:52.819 Create I/O Submission Queue (01h): Supported 00:08:52.819 Get Log Page (02h): Supported 00:08:52.819 Delete I/O Completion Queue (04h): Supported 00:08:52.819 Create I/O Completion Queue (05h): Supported 00:08:52.819 Identify (06h): Supported 00:08:52.819 Abort (08h): Supported 00:08:52.819 Set Features (09h): Supported 00:08:52.819 Get Features (0Ah): Supported 00:08:52.819 Asynchronous Event Request (0Ch): Supported 00:08:52.819 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.819 Directive Send (19h): Supported 00:08:52.819 Directive Receive (1Ah): Supported 00:08:52.819 Virtualization Management (1Ch): Supported 00:08:52.819 Doorbell Buffer Config (7Ch): Supported 00:08:52.819 Format NVM (80h): Supported LBA-Change 00:08:52.819 I/O Commands 00:08:52.819 ------------ 00:08:52.819 Flush (00h): Supported LBA-Change 00:08:52.819 Write (01h): Supported LBA-Change 00:08:52.819 Read (02h): Supported 00:08:52.819 Compare (05h): Supported 00:08:52.819 Write Zeroes (08h): Supported LBA-Change 00:08:52.819 Dataset Management (09h): Supported LBA-Change 00:08:52.819 Unknown (0Ch): Supported 00:08:52.819 Unknown (12h): Supported 00:08:52.819 Copy (19h): Supported LBA-Change 00:08:52.819 Unknown (1Dh): Supported LBA-Change 00:08:52.819 00:08:52.819 Error Log 00:08:52.819 ========= 00:08:52.819 00:08:52.819 Arbitration 00:08:52.819 =========== 00:08:52.819 Arbitration Burst: no limit 00:08:52.819 00:08:52.819 Power Management 00:08:52.819 ================ 00:08:52.819 Number of Power States: 1 00:08:52.819 Current Power State: Power State #0 00:08:52.819 Power State #0: 00:08:52.819 Max 
Power: 25.00 W 00:08:52.819 Non-Operational State: Operational 00:08:52.819 Entry Latency: 16 microseconds 00:08:52.819 Exit Latency: 4 microseconds 00:08:52.819 Relative Read Throughput: 0 00:08:52.819 Relative Read Latency: 0 00:08:52.819 Relative Write Throughput: 0 00:08:52.819 Relative Write Latency: 0 00:08:52.819 Idle Power: Not Reported 00:08:52.819 Active Power: Not Reported 00:08:52.819 Non-Operational Permissive Mode: Not Supported 00:08:52.819 00:08:52.819 Health Information 00:08:52.819 ================== 00:08:52.819 Critical Warnings: 00:08:52.819 Available Spare Space: OK 00:08:52.819 Temperature: OK 00:08:52.819 Device Reliability: OK 00:08:52.819 Read Only: No 00:08:52.819 Volatile Memory Backup: OK 00:08:52.819 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.819 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.819 Available Spare: 0% 00:08:52.819 Available Spare Threshold: 0% 00:08:52.819 Life Percentage Used: 0% 00:08:52.819 Data Units Read: 1075 00:08:52.819 Data Units Written: 942 00:08:52.819 Host Read Commands: 59220 00:08:52.819 Host Write Commands: 58010 00:08:52.819 Controller Busy Time: 0 minutes 00:08:52.819 Power Cycles: 0 00:08:52.819 Power On Hours: 0 hours 00:08:52.819 Unsafe Shutdowns: 0 00:08:52.819 Unrecoverable Media Errors: 0 00:08:52.819 Lifetime Error Log Entries: 0 00:08:52.819 Warning Temperature Time: 0 minutes 00:08:52.819 Critical Temperature Time: 0 minutes 00:08:52.819 00:08:52.819 Number of Queues 00:08:52.819 ================ 00:08:52.819 Number of I/O Submission Queues: 64 00:08:52.819 Number of I/O Completion Queues: 64 00:08:52.819 00:08:52.819 ZNS Specific Controller Data 00:08:52.819 ============================ 00:08:52.819 Zone Append Size Limit: 0 00:08:52.819 00:08:52.819 00:08:52.819 Active Namespaces 00:08:52.819 ================= 00:08:52.819 Namespace ID:1 00:08:52.819 Error Recovery Timeout: Unlimited 00:08:52.819 Command Set Identifier: NVM (00h) 00:08:52.819 Deallocate: Supported 00:08:52.819 Deallocated/Unwritten Error: Supported 00:08:52.819 Deallocated Read Value: All 0x00 00:08:52.819 Deallocate in Write Zeroes: Not Supported 00:08:52.819 Deallocated Guard Field: 0xFFFF 00:08:52.819 Flush: Supported 00:08:52.819 Reservation: Not Supported 00:08:52.819 Namespace Sharing Capabilities: Private 00:08:52.819 Size (in LBAs): 1310720 (5GiB) 00:08:52.819 Capacity (in LBAs): 1310720 (5GiB) 00:08:52.819 Utilization (in LBAs): 1310720 (5GiB) 00:08:52.819 Thin Provisioning: Not Supported 00:08:52.819 Per-NS Atomic Units: No 00:08:52.820 Maximum Single Source Range Length: 128 00:08:52.820 Maximum Copy Length: 128 00:08:52.820 Maximum Source Range Count: 128 00:08:52.820 NGUID/EUI64 Never Reused: No 00:08:52.820 Namespace Write Protected: No 00:08:52.820 Number of LBA Formats: 8 00:08:52.820 Current LBA Format: LBA Format #04 00:08:52.820 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.820 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.820 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.820 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.820 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.820 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.820 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.820 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.820 00:08:52.820 NVM Specific Namespace Data 00:08:52.820 =========================== 00:08:52.820 Logical Block Storage Tag Mask: 0 00:08:52.820 Protection Information Capabilities: 00:08:52.820 16b 
Guard Protection Information Storage Tag Support: No 00:08:52.820 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.820 Storage Tag Check Read Support: No 00:08:52.820 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.820 ===================================================== 00:08:52.820 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.820 ===================================================== 00:08:52.820 Controller Capabilities/Features 00:08:52.820 ================================ 00:08:52.820 Vendor ID: 1b36 00:08:52.820 Subsystem Vendor ID: 1af4 00:08:52.820 Serial Number: 12343 00:08:52.820 Model Number: QEMU NVMe Ctrl 00:08:52.820 Firmware Version: 8.0.0 00:08:52.820 Recommended Arb Burst: 6 00:08:52.820 IEEE OUI Identifier: 00 54 52 00:08:52.820 Multi-path I/O 00:08:52.820 May have multiple subsystem ports: No 00:08:52.820 May have multiple controllers: Yes 00:08:52.820 Associated with SR-IOV VF: No 00:08:52.820 Max Data Transfer Size: 524288 00:08:52.820 Max Number of Namespaces: 256 00:08:52.820 Max Number of I/O Queues: 64 00:08:52.820 NVMe Specification Version (VS): 1.4 00:08:52.820 NVMe Specification Version (Identify): 1.4 00:08:52.820 Maximum Queue Entries: 2048 00:08:52.820 Contiguous Queues Required: Yes 00:08:52.820 Arbitration Mechanisms Supported 00:08:52.820 Weighted Round Robin: Not Supported 00:08:52.820 Vendor Specific: Not Supported 00:08:52.820 Reset Timeout: 7500 ms 00:08:52.820 Doorbell Stride: 4 bytes 00:08:52.820 NVM Subsystem Reset: Not Supported 00:08:52.820 Command Sets Supported 00:08:52.820 NVM Command Set: Supported 00:08:52.820 Boot Partition: Not Supported 00:08:52.820 Memory Page Size Minimum: 4096 bytes 00:08:52.820 Memory Page Size Maximum: 65536 bytes 00:08:52.820 Persistent Memory Region: Not Supported 00:08:52.820 Optional Asynchronous Events Supported 00:08:52.820 Namespace Attribute Notices: Supported 00:08:52.820 Firmware Activation Notices: Not Supported 00:08:52.820 ANA Change Notices: Not Supported 00:08:52.820 PLE Aggregate Log Change Notices: Not Supported 00:08:52.820 LBA Status Info Alert Notices: Not Supported 00:08:52.820 EGE Aggregate Log Change Notices: Not Supported 00:08:52.820 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.820 Zone Descriptor Change Notices: Not Supported 00:08:52.820 Discovery Log Change Notices: Not Supported 00:08:52.820 Controller Attributes 00:08:52.820 128-bit Host Identifier: Not Supported 00:08:52.820 Non-Operational Permissive Mode: Not Supported 00:08:52.820 NVM Sets: Not Supported 00:08:52.820 Read Recovery Levels: Not Supported 00:08:52.820 Endurance Groups: Supported 00:08:52.820 Predictable Latency Mode: Not Supported 00:08:52.820 Traffic Based Keep Alive: Not Supported 00:08:52.820
Namespace Granularity: Not Supported 00:08:52.820 SQ Associations: Not Supported 00:08:52.820 UUID List: Not Supported 00:08:52.820 Multi-Domain Subsystem: Not Supported 00:08:52.820 Fixed Capacity Management: Not Supported 00:08:52.820 Variable Capacity Management: Not Supported 00:08:52.820 Delete Endurance Group: Not Supported 00:08:52.820 Delete NVM Set: Not Supported 00:08:52.820 Extended LBA Formats Supported: Supported 00:08:52.820 Flexible Data Placement Supported: Supported 00:08:52.820 00:08:52.820 Controller Memory Buffer Support 00:08:52.820 ================================ 00:08:52.820 Supported: No 00:08:52.820 00:08:52.820 Persistent Memory Region Support 00:08:52.820 ================================ 00:08:52.820 Supported: No 00:08:52.820 00:08:52.820 Admin Command Set Attributes 00:08:52.820 ============================ 00:08:52.820 Security Send/Receive: Not Supported 00:08:52.820 Format NVM: Supported 00:08:52.820 Firmware Activate/Download: Not Supported 00:08:52.820 Namespace Management: Supported 00:08:52.820 Device Self-Test: Not Supported 00:08:52.820 Directives: Supported 00:08:52.820 NVMe-MI: Not Supported 00:08:52.820 Virtualization Management: Not Supported 00:08:52.820 Doorbell Buffer Config: Supported 00:08:52.820 Get LBA Status Capability: Not Supported 00:08:52.820 Command & Feature Lockdown Capability: Not Supported 00:08:52.820 Abort Command Limit: 4 00:08:52.820 Async Event Request Limit: 4 00:08:52.820 Number of Firmware Slots: N/A 00:08:52.820 Firmware Slot 1 Read-Only: N/A 00:08:52.820 Firmware Activation Without Reset: N/A 00:08:52.820 Multiple Update Detection Support: N/A 00:08:52.820 Firmware Update Granularity: No Information Provided 00:08:52.820 Per-Namespace SMART Log: Yes 00:08:52.820 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.820 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.820 Command Effects Log Page: Supported 00:08:52.820 Get Log Page Extended Data: Supported 00:08:52.820 Telemetry Log Pages: Not Supported 00:08:52.820 Persistent Event Log Pages: Not Supported 00:08:52.820 Supported Log Pages Log Page: May Support 00:08:52.820 Commands Supported & Effects Log Page: Not Supported 00:08:52.820 Feature Identifiers & Effects Log Page: May Support 00:08:52.820 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.820 Data Area 4 for Telemetry Log: Not Supported 00:08:52.820 Error Log Page Entries Supported: 1 00:08:52.820 Keep Alive: Not Supported 00:08:52.820 00:08:52.820 NVM Command Set Attributes 00:08:52.820 ========================== 00:08:52.820 Submission Queue Entry Size 00:08:52.820 Max: 64 00:08:52.820 Min: 64 00:08:52.820 Completion Queue Entry Size 00:08:52.820 Max: 16 00:08:52.820 Min: 16 00:08:52.820 Number of Namespaces: 256 00:08:52.820 Compare Command: Supported 00:08:52.820 Write Uncorrectable Command: Not Supported 00:08:52.820 Dataset Management Command: Supported 00:08:52.820 Write Zeroes Command: Supported 00:08:52.820 Set Features Save Field: Supported 00:08:52.820 Reservations: Not Supported 00:08:52.820 Timestamp: Supported 00:08:52.820 Copy: Supported 00:08:52.820 Volatile Write Cache: Present 00:08:52.820 Atomic Write Unit (Normal): 1 00:08:52.820 Atomic Write Unit (PFail): 1 00:08:52.820 Atomic Compare & Write Unit: 1 00:08:52.820 Fused Compare & Write: Not Supported 00:08:52.820 Scatter-Gather List 00:08:52.820 SGL Command Set: Supported 00:08:52.820 SGL Keyed: Not Supported 00:08:52.820 SGL Bit Bucket Descriptor: Not Supported 00:08:52.820 SGL Metadata Pointer: Not Supported
00:08:52.820 Oversized SGL: Not Supported 00:08:52.820 SGL Metadata Address: Not Supported 00:08:52.820 SGL Offset: Not Supported 00:08:52.820 Transport SGL Data Block: Not Supported 00:08:52.820 Replay Protected Memory Block: Not Supported 00:08:52.820 00:08:52.820 Firmware Slot Information 00:08:52.820 ========================= 00:08:52.820 Active slot: 1 00:08:52.820 Slot 1 Firmware Revision: 1.0 00:08:52.820 00:08:52.820 00:08:52.820 Commands Supported and Effects 00:08:52.820 ============================== 00:08:52.820 Admin Commands 00:08:52.820 -------------- 00:08:52.820 Delete I/O Submission Queue (00h): Supported 00:08:52.820 Create I/O Submission Queue (01h): Supported 00:08:52.820 Get Log Page (02h): Supported 00:08:52.820 Delete I/O Completion Queue (04h): Supported 00:08:52.820 Create I/O Completion Queue (05h): Supported 00:08:52.820 Identify (06h): Supported 00:08:52.820 Abort (08h): Supported 00:08:52.820 Set Features (09h): Supported 00:08:52.820 Get Features (0Ah): Supported 00:08:52.820 Asynchronous Event Request (0Ch): Supported 00:08:52.820 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.820 Directive Send (19h): Supported 00:08:52.820 Directive Receive (1Ah): Supported 00:08:52.821 Virtualization Management (1Ch): Supported 00:08:52.821 Doorbell Buffer Config (7Ch): Supported 00:08:52.821 Format NVM (80h): Supported LBA-Change 00:08:52.821 I/O Commands 00:08:52.821 ------------ 00:08:52.821 Flush (00h): Supported LBA-Change 00:08:52.821 Write (01h): Supported LBA-Change 00:08:52.821 Read (02h): Supported 00:08:52.821 Compare (05h): Supported 00:08:52.821 Write Zeroes (08h): Supported LBA-Change 00:08:52.821 Dataset Management (09h): Supported LBA-Change 00:08:52.821 Unknown (0Ch): Supported 00:08:52.821 Unknown (12h): Supported 00:08:52.821 Copy (19h): Supported LBA-Change 00:08:52.821 Unknown (1Dh): Supported LBA-Change 00:08:52.821 00:08:52.821 Error Log 00:08:52.821 ========= 00:08:52.821 00:08:52.821 Arbitration 00:08:52.821 =========== 00:08:52.821 Arbitration Burst: no limit 00:08:52.821 00:08:52.821 Power Management 00:08:52.821 ================ 00:08:52.821 Number of Power States: 1 00:08:52.821 Current Power State: Power State #0 00:08:52.821 Power State #0: 00:08:52.821 Max Power: 25.00 W 00:08:52.821 Non-Operational State: Operational 00:08:52.821 Entry Latency: 16 microseconds 00:08:52.821 Exit Latency: 4 microseconds 00:08:52.821 Relative Read Throughput: 0 00:08:52.821 Relative Read Latency: 0 00:08:52.821 Relative Write Throughput: 0 00:08:52.821 Relative Write Latency: 0 00:08:52.821 Idle Power: Not Reported 00:08:52.821 Active Power: Not Reported 00:08:52.821 Non-Operational Permissive Mode: Not Supported 00:08:52.821 00:08:52.821 Health Information 00:08:52.821 ================== 00:08:52.821 Critical Warnings: 00:08:52.821 Available Spare Space: OK 00:08:52.821 Temperature: OK 00:08:52.821 Device Reliability: OK 00:08:52.821 Read Only: No 00:08:52.821 Volatile Memory Backup: OK 00:08:52.821 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.821 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.821 Available Spare: 0% 00:08:52.821 Available Spare Threshold: 0% 00:08:52.821 Life Percentage Used: [2024-12-09 14:41:30.827365] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64221 terminated unexpected 00:08:52.821 [2024-12-09 14:41:30.828286] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64221 terminated unexpected 00:08:52.821 [2024-12-09 
14:41:30.828972] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64221 terminated unexpected 00:08:52.821 [2024-12-09 14:41:30.830052] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64221 terminated unexpected 00:08:52.821 0% 00:08:52.821 Data Units Read: 1151 00:08:52.821 Data Units Written: 1080 00:08:52.821 Host Read Commands: 44773 00:08:52.821 Host Write Commands: 44196 00:08:52.821 Controller Busy Time: 0 minutes 00:08:52.821 Power Cycles: 0 00:08:52.821 Power On Hours: 0 hours 00:08:52.821 Unsafe Shutdowns: 0 00:08:52.821 Unrecoverable Media Errors: 0 00:08:52.821 Lifetime Error Log Entries: 0 00:08:52.821 Warning Temperature Time: 0 minutes 00:08:52.821 Critical Temperature Time: 0 minutes 00:08:52.821 00:08:52.821 Number of Queues 00:08:52.821 ================ 00:08:52.821 Number of I/O Submission Queues: 64 00:08:52.821 Number of I/O Completion Queues: 64 00:08:52.821 00:08:52.821 ZNS Specific Controller Data 00:08:52.821 ============================ 00:08:52.821 Zone Append Size Limit: 0 00:08:52.821 00:08:52.821 00:08:52.821 Active Namespaces 00:08:52.821 ================= 00:08:52.821 Namespace ID:1 00:08:52.821 Error Recovery Timeout: Unlimited 00:08:52.821 Command Set Identifier: NVM (00h) 00:08:52.821 Deallocate: Supported 00:08:52.821 Deallocated/Unwritten Error: Supported 00:08:52.821 Deallocated Read Value: All 0x00 00:08:52.821 Deallocate in Write Zeroes: Not Supported 00:08:52.821 Deallocated Guard Field: 0xFFFF 00:08:52.821 Flush: Supported 00:08:52.821 Reservation: Not Supported 00:08:52.821 Namespace Sharing Capabilities: Multiple Controllers 00:08:52.821 Size (in LBAs): 262144 (1GiB) 00:08:52.821 Capacity (in LBAs): 262144 (1GiB) 00:08:52.821 Utilization (in LBAs): 262144 (1GiB) 00:08:52.821 Thin Provisioning: Not Supported 00:08:52.821 Per-NS Atomic Units: No 00:08:52.821 Maximum Single Source Range Length: 128 00:08:52.821 Maximum Copy Length: 128 00:08:52.821 Maximum Source Range Count: 128 00:08:52.821 NGUID/EUI64 Never Reused: No 00:08:52.821 Namespace Write Protected: No 00:08:52.821 Endurance group ID: 1 00:08:52.821 Number of LBA Formats: 8 00:08:52.821 Current LBA Format: LBA Format #04 00:08:52.821 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.821 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.821 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.821 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.821 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.821 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.821 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.821 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.821 00:08:52.821 Get Feature FDP: 00:08:52.821 ================ 00:08:52.821 Enabled: Yes 00:08:52.821 FDP configuration index: 0 00:08:52.821 00:08:52.821 FDP configurations log page 00:08:52.821 =========================== 00:08:52.821 Number of FDP configurations: 1 00:08:52.821 Version: 0 00:08:52.821 Size: 112 00:08:52.821 FDP Configuration Descriptor: 0 00:08:52.821 Descriptor Size: 96 00:08:52.821 Reclaim Group Identifier format: 2 00:08:52.821 FDP Volatile Write Cache: Not Present 00:08:52.821 FDP Configuration: Valid 00:08:52.821 Vendor Specific Size: 0 00:08:52.821 Number of Reclaim Groups: 2 00:08:52.821 Number of Reclaim Unit Handles: 8 00:08:52.821 Max Placement Identifiers: 128 00:08:52.821 Number of Namespaces Supported: 256 00:08:52.821 Reclaim unit Nominal Size: 6000000 bytes
00:08:52.821 Estimated Reclaim Unit Time Limit: Not Reported 00:08:52.821 RUH Desc #000: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #001: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #002: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #003: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #004: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #005: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #006: RUH Type: Initially Isolated 00:08:52.821 RUH Desc #007: RUH Type: Initially Isolated 00:08:52.821 00:08:52.821 FDP reclaim unit handle usage log page 00:08:52.821 ====================================== 00:08:52.821 Number of Reclaim Unit Handles: 8 00:08:52.821 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:52.821 RUH Usage Desc #001: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #002: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #003: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #004: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #005: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #006: RUH Attributes: Unused 00:08:52.821 RUH Usage Desc #007: RUH Attributes: Unused 00:08:52.821 00:08:52.821 FDP statistics log page 00:08:52.821 ======================= 00:08:52.821 Host bytes with metadata written: 661233664 00:08:52.821 Media bytes with metadata written: 661315584 00:08:52.821 Media bytes erased: 0 00:08:52.821 00:08:52.821 FDP events log page 00:08:52.821 =================== 00:08:52.821 Number of FDP events: 0 00:08:52.821 00:08:52.821 NVM Specific Namespace Data 00:08:52.821 =========================== 00:08:52.821 Logical Block Storage Tag Mask: 0 00:08:52.821 Protection Information Capabilities: 00:08:52.821 16b Guard Protection Information Storage Tag Support: No 00:08:52.821 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.821 Storage Tag Check Read Support: No 00:08:52.821 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.821 ===================================================== 00:08:52.821 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.821 ===================================================== 00:08:52.821 Controller Capabilities/Features 00:08:52.821 ================================ 00:08:52.821 Vendor ID: 1b36 00:08:52.821 Subsystem Vendor ID: 1af4 00:08:52.821 Serial Number: 12342 00:08:52.821 Model Number: QEMU NVMe Ctrl 00:08:52.821 Firmware Version: 8.0.0 00:08:52.821 Recommended Arb Burst: 6 00:08:52.821 IEEE OUI Identifier: 00 54 52 00:08:52.821 Multi-path I/O 00:08:52.821 May have multiple subsystem ports: No 00:08:52.821 May have multiple controllers: No 00:08:52.821 Associated with SR-IOV VF: No 00:08:52.822 Max Data Transfer Size: 524288 00:08:52.822 Max Number of Namespaces: 256 00:08:52.822 Max 
Number of I/O Queues: 64 00:08:52.822 NVMe Specification Version (VS): 1.4 00:08:52.822 NVMe Specification Version (Identify): 1.4 00:08:52.822 Maximum Queue Entries: 2048 00:08:52.822 Contiguous Queues Required: Yes 00:08:52.822 Arbitration Mechanisms Supported 00:08:52.822 Weighted Round Robin: Not Supported 00:08:52.822 Vendor Specific: Not Supported 00:08:52.822 Reset Timeout: 7500 ms 00:08:52.822 Doorbell Stride: 4 bytes 00:08:52.822 NVM Subsystem Reset: Not Supported 00:08:52.822 Command Sets Supported 00:08:52.822 NVM Command Set: Supported 00:08:52.822 Boot Partition: Not Supported 00:08:52.822 Memory Page Size Minimum: 4096 bytes 00:08:52.822 Memory Page Size Maximum: 65536 bytes 00:08:52.822 Persistent Memory Region: Not Supported 00:08:52.822 Optional Asynchronous Events Supported 00:08:52.822 Namespace Attribute Notices: Supported 00:08:52.822 Firmware Activation Notices: Not Supported 00:08:52.822 ANA Change Notices: Not Supported 00:08:52.822 PLE Aggregate Log Change Notices: Not Supported 00:08:52.822 LBA Status Info Alert Notices: Not Supported 00:08:52.822 EGE Aggregate Log Change Notices: Not Supported 00:08:52.822 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.822 Zone Descriptor Change Notices: Not Supported 00:08:52.822 Discovery Log Change Notices: Not Supported 00:08:52.822 Controller Attributes 00:08:52.822 128-bit Host Identifier: Not Supported 00:08:52.822 Non-Operational Permissive Mode: Not Supported 00:08:52.822 NVM Sets: Not Supported 00:08:52.822 Read Recovery Levels: Not Supported 00:08:52.822 Endurance Groups: Not Supported 00:08:52.822 Predictable Latency Mode: Not Supported 00:08:52.822 Traffic Based Keep ALive: Not Supported 00:08:52.822 Namespace Granularity: Not Supported 00:08:52.822 SQ Associations: Not Supported 00:08:52.822 UUID List: Not Supported 00:08:52.822 Multi-Domain Subsystem: Not Supported 00:08:52.822 Fixed Capacity Management: Not Supported 00:08:52.822 Variable Capacity Management: Not Supported 00:08:52.822 Delete Endurance Group: Not Supported 00:08:52.822 Delete NVM Set: Not Supported 00:08:52.822 Extended LBA Formats Supported: Supported 00:08:52.822 Flexible Data Placement Supported: Not Supported 00:08:52.822 00:08:52.822 Controller Memory Buffer Support 00:08:52.822 ================================ 00:08:52.822 Supported: No 00:08:52.822 00:08:52.822 Persistent Memory Region Support 00:08:52.822 ================================ 00:08:52.822 Supported: No 00:08:52.822 00:08:52.822 Admin Command Set Attributes 00:08:52.822 ============================ 00:08:52.822 Security Send/Receive: Not Supported 00:08:52.822 Format NVM: Supported 00:08:52.822 Firmware Activate/Download: Not Supported 00:08:52.822 Namespace Management: Supported 00:08:52.822 Device Self-Test: Not Supported 00:08:52.822 Directives: Supported 00:08:52.822 NVMe-MI: Not Supported 00:08:52.822 Virtualization Management: Not Supported 00:08:52.822 Doorbell Buffer Config: Supported 00:08:52.822 Get LBA Status Capability: Not Supported 00:08:52.822 Command & Feature Lockdown Capability: Not Supported 00:08:52.822 Abort Command Limit: 4 00:08:52.822 Async Event Request Limit: 4 00:08:52.822 Number of Firmware Slots: N/A 00:08:52.822 Firmware Slot 1 Read-Only: N/A 00:08:52.822 Firmware Activation Without Reset: N/A 00:08:52.822 Multiple Update Detection Support: N/A 00:08:52.822 Firmware Update Granularity: No Information Provided 00:08:52.822 Per-Namespace SMART Log: Yes 00:08:52.822 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.822 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:08:52.822 Command Effects Log Page: Supported 00:08:52.822 Get Log Page Extended Data: Supported 00:08:52.822 Telemetry Log Pages: Not Supported 00:08:52.822 Persistent Event Log Pages: Not Supported 00:08:52.822 Supported Log Pages Log Page: May Support 00:08:52.822 Commands Supported & Effects Log Page: Not Supported 00:08:52.822 Feature Identifiers & Effects Log Page:May Support 00:08:52.822 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.822 Data Area 4 for Telemetry Log: Not Supported 00:08:52.822 Error Log Page Entries Supported: 1 00:08:52.822 Keep Alive: Not Supported 00:08:52.822 00:08:52.822 NVM Command Set Attributes 00:08:52.822 ========================== 00:08:52.822 Submission Queue Entry Size 00:08:52.822 Max: 64 00:08:52.822 Min: 64 00:08:52.822 Completion Queue Entry Size 00:08:52.822 Max: 16 00:08:52.822 Min: 16 00:08:52.822 Number of Namespaces: 256 00:08:52.822 Compare Command: Supported 00:08:52.822 Write Uncorrectable Command: Not Supported 00:08:52.822 Dataset Management Command: Supported 00:08:52.822 Write Zeroes Command: Supported 00:08:52.822 Set Features Save Field: Supported 00:08:52.822 Reservations: Not Supported 00:08:52.822 Timestamp: Supported 00:08:52.822 Copy: Supported 00:08:52.822 Volatile Write Cache: Present 00:08:52.822 Atomic Write Unit (Normal): 1 00:08:52.822 Atomic Write Unit (PFail): 1 00:08:52.822 Atomic Compare & Write Unit: 1 00:08:52.822 Fused Compare & Write: Not Supported 00:08:52.822 Scatter-Gather List 00:08:52.822 SGL Command Set: Supported 00:08:52.822 SGL Keyed: Not Supported 00:08:52.822 SGL Bit Bucket Descriptor: Not Supported 00:08:52.822 SGL Metadata Pointer: Not Supported 00:08:52.822 Oversized SGL: Not Supported 00:08:52.822 SGL Metadata Address: Not Supported 00:08:52.822 SGL Offset: Not Supported 00:08:52.822 Transport SGL Data Block: Not Supported 00:08:52.822 Replay Protected Memory Block: Not Supported 00:08:52.822 00:08:52.822 Firmware Slot Information 00:08:52.822 ========================= 00:08:52.822 Active slot: 1 00:08:52.822 Slot 1 Firmware Revision: 1.0 00:08:52.822 00:08:52.822 00:08:52.822 Commands Supported and Effects 00:08:52.822 ============================== 00:08:52.822 Admin Commands 00:08:52.822 -------------- 00:08:52.822 Delete I/O Submission Queue (00h): Supported 00:08:52.822 Create I/O Submission Queue (01h): Supported 00:08:52.822 Get Log Page (02h): Supported 00:08:52.822 Delete I/O Completion Queue (04h): Supported 00:08:52.822 Create I/O Completion Queue (05h): Supported 00:08:52.822 Identify (06h): Supported 00:08:52.822 Abort (08h): Supported 00:08:52.822 Set Features (09h): Supported 00:08:52.822 Get Features (0Ah): Supported 00:08:52.822 Asynchronous Event Request (0Ch): Supported 00:08:52.822 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.822 Directive Send (19h): Supported 00:08:52.822 Directive Receive (1Ah): Supported 00:08:52.822 Virtualization Management (1Ch): Supported 00:08:52.822 Doorbell Buffer Config (7Ch): Supported 00:08:52.822 Format NVM (80h): Supported LBA-Change 00:08:52.822 I/O Commands 00:08:52.822 ------------ 00:08:52.822 Flush (00h): Supported LBA-Change 00:08:52.822 Write (01h): Supported LBA-Change 00:08:52.822 Read (02h): Supported 00:08:52.822 Compare (05h): Supported 00:08:52.822 Write Zeroes (08h): Supported LBA-Change 00:08:52.822 Dataset Management (09h): Supported LBA-Change 00:08:52.822 Unknown (0Ch): Supported 00:08:52.822 Unknown (12h): Supported 00:08:52.822 Copy (19h): Supported LBA-Change 
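Each controller block in this run is produced by the test script's per-device loop; the `for bdf in "${bdfs[@]}"` / `spdk_nvme_identify` trace from nvme.sh appears between the dumps. A minimal sketch of that pattern, assuming the BDF list is built from lspci and filtered on the QEMU NVMe IDs 1b36:0010 reported above (the lspci filter is illustrative, not the actual nvme.sh code):

    # Sketch: enumerate QEMU NVMe PCIe functions (vendor:device 1b36:0010,
    # as reported in the dumps in this log) and identify each one in turn.
    bdfs=($(lspci -Dn | awk '$3 == "1b36:0010" {print $1}'))
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done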
00:08:52.822 Unknown (1Dh): Supported LBA-Change 00:08:52.822 00:08:52.822 Error Log 00:08:52.822 ========= 00:08:52.822 00:08:52.822 Arbitration 00:08:52.822 =========== 00:08:52.822 Arbitration Burst: no limit 00:08:52.822 00:08:52.822 Power Management 00:08:52.822 ================ 00:08:52.822 Number of Power States: 1 00:08:52.822 Current Power State: Power State #0 00:08:52.822 Power State #0: 00:08:52.822 Max Power: 25.00 W 00:08:52.822 Non-Operational State: Operational 00:08:52.822 Entry Latency: 16 microseconds 00:08:52.823 Exit Latency: 4 microseconds 00:08:52.823 Relative Read Throughput: 0 00:08:52.823 Relative Read Latency: 0 00:08:52.823 Relative Write Throughput: 0 00:08:52.823 Relative Write Latency: 0 00:08:52.823 Idle Power: Not Reported 00:08:52.823 Active Power: Not Reported 00:08:52.823 Non-Operational Permissive Mode: Not Supported 00:08:52.823 00:08:52.823 Health Information 00:08:52.823 ================== 00:08:52.823 Critical Warnings: 00:08:52.823 Available Spare Space: OK 00:08:52.823 Temperature: OK 00:08:52.823 Device Reliability: OK 00:08:52.823 Read Only: No 00:08:52.823 Volatile Memory Backup: OK 00:08:52.823 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.823 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.823 Available Spare: 0% 00:08:52.823 Available Spare Threshold: 0% 00:08:52.823 Life Percentage Used: 0% 00:08:52.823 Data Units Read: 2444 00:08:52.823 Data Units Written: 2231 00:08:52.823 Host Read Commands: 126190 00:08:52.823 Host Write Commands: 124459 00:08:52.823 Controller Busy Time: 0 minutes 00:08:52.823 Power Cycles: 0 00:08:52.823 Power On Hours: 0 hours 00:08:52.823 Unsafe Shutdowns: 0 00:08:52.823 Unrecoverable Media Errors: 0 00:08:52.823 Lifetime Error Log Entries: 0 00:08:52.823 Warning Temperature Time: 0 minutes 00:08:52.823 Critical Temperature Time: 0 minutes 00:08:52.823 00:08:52.823 Number of Queues 00:08:52.823 ================ 00:08:52.823 Number of I/O Submission Queues: 64 00:08:52.823 Number of I/O Completion Queues: 64 00:08:52.823 00:08:52.823 ZNS Specific Controller Data 00:08:52.823 ============================ 00:08:52.823 Zone Append Size Limit: 0 00:08:52.823 00:08:52.823 00:08:52.823 Active Namespaces 00:08:52.823 ================= 00:08:52.823 Namespace ID:1 00:08:52.823 Error Recovery Timeout: Unlimited 00:08:52.823 Command Set Identifier: NVM (00h) 00:08:52.823 Deallocate: Supported 00:08:52.823 Deallocated/Unwritten Error: Supported 00:08:52.823 Deallocated Read Value: All 0x00 00:08:52.823 Deallocate in Write Zeroes: Not Supported 00:08:52.823 Deallocated Guard Field: 0xFFFF 00:08:52.823 Flush: Supported 00:08:52.823 Reservation: Not Supported 00:08:52.823 Namespace Sharing Capabilities: Private 00:08:52.823 Size (in LBAs): 1048576 (4GiB) 00:08:52.823 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.823 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.823 Thin Provisioning: Not Supported 00:08:52.823 Per-NS Atomic Units: No 00:08:52.823 Maximum Single Source Range Length: 128 00:08:52.823 Maximum Copy Length: 128 00:08:52.823 Maximum Source Range Count: 128 00:08:52.823 NGUID/EUI64 Never Reused: No 00:08:52.823 Namespace Write Protected: No 00:08:52.823 Number of LBA Formats: 8 00:08:52.823 Current LBA Format: LBA Format #04 00:08:52.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.823 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.823 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.823 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.823 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:52.823 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.823 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.823 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.823 00:08:52.823 NVM Specific Namespace Data 00:08:52.823 =========================== 00:08:52.823 Logical Block Storage Tag Mask: 0 00:08:52.823 Protection Information Capabilities: 00:08:52.823 16b Guard Protection Information Storage Tag Support: No 00:08:52.823 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.823 Storage Tag Check Read Support: No 00:08:52.823 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Namespace ID:2 00:08:52.823 Error Recovery Timeout: Unlimited 00:08:52.823 Command Set Identifier: NVM (00h) 00:08:52.823 Deallocate: Supported 00:08:52.823 Deallocated/Unwritten Error: Supported 00:08:52.823 Deallocated Read Value: All 0x00 00:08:52.823 Deallocate in Write Zeroes: Not Supported 00:08:52.823 Deallocated Guard Field: 0xFFFF 00:08:52.823 Flush: Supported 00:08:52.823 Reservation: Not Supported 00:08:52.823 Namespace Sharing Capabilities: Private 00:08:52.823 Size (in LBAs): 1048576 (4GiB) 00:08:52.823 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.823 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.823 Thin Provisioning: Not Supported 00:08:52.823 Per-NS Atomic Units: No 00:08:52.823 Maximum Single Source Range Length: 128 00:08:52.823 Maximum Copy Length: 128 00:08:52.823 Maximum Source Range Count: 128 00:08:52.823 NGUID/EUI64 Never Reused: No 00:08:52.823 Namespace Write Protected: No 00:08:52.823 Number of LBA Formats: 8 00:08:52.823 Current LBA Format: LBA Format #04 00:08:52.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.823 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.823 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.823 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.823 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.823 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.823 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.823 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.823 00:08:52.823 NVM Specific Namespace Data 00:08:52.823 =========================== 00:08:52.823 Logical Block Storage Tag Mask: 0 00:08:52.823 Protection Information Capabilities: 00:08:52.823 16b Guard Protection Information Storage Tag Support: No 00:08:52.823 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.823 Storage Tag Check Read Support: No 00:08:52.823 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:52.823 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Namespace ID:3 00:08:52.823 Error Recovery Timeout: Unlimited 00:08:52.823 Command Set Identifier: NVM (00h) 00:08:52.823 Deallocate: Supported 00:08:52.823 Deallocated/Unwritten Error: Supported 00:08:52.823 Deallocated Read Value: All 0x00 00:08:52.823 Deallocate in Write Zeroes: Not Supported 00:08:52.823 Deallocated Guard Field: 0xFFFF 00:08:52.823 Flush: Supported 00:08:52.823 Reservation: Not Supported 00:08:52.823 Namespace Sharing Capabilities: Private 00:08:52.823 Size (in LBAs): 1048576 (4GiB) 00:08:52.823 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.823 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.823 Thin Provisioning: Not Supported 00:08:52.823 Per-NS Atomic Units: No 00:08:52.823 Maximum Single Source Range Length: 128 00:08:52.823 Maximum Copy Length: 128 00:08:52.823 Maximum Source Range Count: 128 00:08:52.823 NGUID/EUI64 Never Reused: No 00:08:52.823 Namespace Write Protected: No 00:08:52.823 Number of LBA Formats: 8 00:08:52.823 Current LBA Format: LBA Format #04 00:08:52.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.823 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.823 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.823 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.823 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.823 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.823 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.823 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.823 00:08:52.823 NVM Specific Namespace Data 00:08:52.823 =========================== 00:08:52.823 Logical Block Storage Tag Mask: 0 00:08:52.823 Protection Information Capabilities: 00:08:52.823 16b Guard Protection Information Storage Tag Support: No 00:08:52.823 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.823 Storage Tag Check Read Support: No 00:08:52.823 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.823 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.824 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.824 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.824 14:41:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:53.082 ===================================================== 00:08:53.082 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.082 ===================================================== 00:08:53.082 Controller Capabilities/Features 00:08:53.082 ================================ 00:08:53.082 Vendor ID: 1b36 00:08:53.082 Subsystem Vendor ID: 1af4 00:08:53.082 Serial Number: 12340 00:08:53.082 Model Number: QEMU NVMe Ctrl 00:08:53.082 Firmware Version: 8.0.0 00:08:53.082 Recommended Arb Burst: 6 00:08:53.082 IEEE OUI Identifier: 00 54 52 00:08:53.082 Multi-path I/O 00:08:53.082 May have multiple subsystem ports: No 00:08:53.082 May have multiple controllers: No 00:08:53.082 Associated with SR-IOV VF: No 00:08:53.082 Max Data Transfer Size: 524288 00:08:53.082 Max Number of Namespaces: 256 00:08:53.082 Max Number of I/O Queues: 64 00:08:53.082 NVMe Specification Version (VS): 1.4 00:08:53.082 NVMe Specification Version (Identify): 1.4 00:08:53.082 Maximum Queue Entries: 2048 00:08:53.082 Contiguous Queues Required: Yes 00:08:53.082 Arbitration Mechanisms Supported 00:08:53.082 Weighted Round Robin: Not Supported 00:08:53.082 Vendor Specific: Not Supported 00:08:53.082 Reset Timeout: 7500 ms 00:08:53.082 Doorbell Stride: 4 bytes 00:08:53.082 NVM Subsystem Reset: Not Supported 00:08:53.082 Command Sets Supported 00:08:53.082 NVM Command Set: Supported 00:08:53.082 Boot Partition: Not Supported 00:08:53.082 Memory Page Size Minimum: 4096 bytes 00:08:53.082 Memory Page Size Maximum: 65536 bytes 00:08:53.082 Persistent Memory Region: Not Supported 00:08:53.082 Optional Asynchronous Events Supported 00:08:53.082 Namespace Attribute Notices: Supported 00:08:53.082 Firmware Activation Notices: Not Supported 00:08:53.082 ANA Change Notices: Not Supported 00:08:53.082 PLE Aggregate Log Change Notices: Not Supported 00:08:53.082 LBA Status Info Alert Notices: Not Supported 00:08:53.082 EGE Aggregate Log Change Notices: Not Supported 00:08:53.082 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.082 Zone Descriptor Change Notices: Not Supported 00:08:53.082 Discovery Log Change Notices: Not Supported 00:08:53.082 Controller Attributes 00:08:53.082 128-bit Host Identifier: Not Supported 00:08:53.082 Non-Operational Permissive Mode: Not Supported 00:08:53.082 NVM Sets: Not Supported 00:08:53.082 Read Recovery Levels: Not Supported 00:08:53.082 Endurance Groups: Not Supported 00:08:53.082 Predictable Latency Mode: Not Supported 00:08:53.082 Traffic Based Keep ALive: Not Supported 00:08:53.082 Namespace Granularity: Not Supported 00:08:53.082 SQ Associations: Not Supported 00:08:53.082 UUID List: Not Supported 00:08:53.082 Multi-Domain Subsystem: Not Supported 00:08:53.082 Fixed Capacity Management: Not Supported 00:08:53.082 Variable Capacity Management: Not Supported 00:08:53.082 Delete Endurance Group: Not Supported 00:08:53.082 Delete NVM Set: Not Supported 00:08:53.082 Extended LBA Formats Supported: Supported 00:08:53.082 Flexible Data Placement Supported: Not Supported 00:08:53.082 00:08:53.082 Controller Memory Buffer Support 00:08:53.082 ================================ 00:08:53.082 Supported: No 00:08:53.082 00:08:53.082 Persistent Memory Region Support 00:08:53.082 ================================ 00:08:53.082 Supported: No 00:08:53.082 00:08:53.082 Admin Command Set Attributes 00:08:53.082 ============================ 00:08:53.082 Security Send/Receive: Not Supported 00:08:53.082 
Format NVM: Supported 00:08:53.082 Firmware Activate/Download: Not Supported 00:08:53.082 Namespace Management: Supported 00:08:53.082 Device Self-Test: Not Supported 00:08:53.082 Directives: Supported 00:08:53.082 NVMe-MI: Not Supported 00:08:53.082 Virtualization Management: Not Supported 00:08:53.082 Doorbell Buffer Config: Supported 00:08:53.082 Get LBA Status Capability: Not Supported 00:08:53.082 Command & Feature Lockdown Capability: Not Supported 00:08:53.082 Abort Command Limit: 4 00:08:53.082 Async Event Request Limit: 4 00:08:53.082 Number of Firmware Slots: N/A 00:08:53.082 Firmware Slot 1 Read-Only: N/A 00:08:53.082 Firmware Activation Without Reset: N/A 00:08:53.082 Multiple Update Detection Support: N/A 00:08:53.082 Firmware Update Granularity: No Information Provided 00:08:53.082 Per-Namespace SMART Log: Yes 00:08:53.082 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.082 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:53.082 Command Effects Log Page: Supported 00:08:53.082 Get Log Page Extended Data: Supported 00:08:53.082 Telemetry Log Pages: Not Supported 00:08:53.082 Persistent Event Log Pages: Not Supported 00:08:53.082 Supported Log Pages Log Page: May Support 00:08:53.082 Commands Supported & Effects Log Page: Not Supported 00:08:53.082 Feature Identifiers & Effects Log Page:May Support 00:08:53.082 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.082 Data Area 4 for Telemetry Log: Not Supported 00:08:53.082 Error Log Page Entries Supported: 1 00:08:53.082 Keep Alive: Not Supported 00:08:53.082 00:08:53.082 NVM Command Set Attributes 00:08:53.082 ========================== 00:08:53.082 Submission Queue Entry Size 00:08:53.082 Max: 64 00:08:53.082 Min: 64 00:08:53.082 Completion Queue Entry Size 00:08:53.082 Max: 16 00:08:53.082 Min: 16 00:08:53.082 Number of Namespaces: 256 00:08:53.082 Compare Command: Supported 00:08:53.082 Write Uncorrectable Command: Not Supported 00:08:53.083 Dataset Management Command: Supported 00:08:53.083 Write Zeroes Command: Supported 00:08:53.083 Set Features Save Field: Supported 00:08:53.083 Reservations: Not Supported 00:08:53.083 Timestamp: Supported 00:08:53.083 Copy: Supported 00:08:53.083 Volatile Write Cache: Present 00:08:53.083 Atomic Write Unit (Normal): 1 00:08:53.083 Atomic Write Unit (PFail): 1 00:08:53.083 Atomic Compare & Write Unit: 1 00:08:53.083 Fused Compare & Write: Not Supported 00:08:53.083 Scatter-Gather List 00:08:53.083 SGL Command Set: Supported 00:08:53.083 SGL Keyed: Not Supported 00:08:53.083 SGL Bit Bucket Descriptor: Not Supported 00:08:53.083 SGL Metadata Pointer: Not Supported 00:08:53.083 Oversized SGL: Not Supported 00:08:53.083 SGL Metadata Address: Not Supported 00:08:53.083 SGL Offset: Not Supported 00:08:53.083 Transport SGL Data Block: Not Supported 00:08:53.083 Replay Protected Memory Block: Not Supported 00:08:53.083 00:08:53.083 Firmware Slot Information 00:08:53.083 ========================= 00:08:53.083 Active slot: 1 00:08:53.083 Slot 1 Firmware Revision: 1.0 00:08:53.083 00:08:53.083 00:08:53.083 Commands Supported and Effects 00:08:53.083 ============================== 00:08:53.083 Admin Commands 00:08:53.083 -------------- 00:08:53.083 Delete I/O Submission Queue (00h): Supported 00:08:53.083 Create I/O Submission Queue (01h): Supported 00:08:53.083 Get Log Page (02h): Supported 00:08:53.083 Delete I/O Completion Queue (04h): Supported 00:08:53.083 Create I/O Completion Queue (05h): Supported 00:08:53.083 Identify (06h): Supported 00:08:53.083 Abort (08h): Supported 
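In this run the serial numbers map one-to-one onto the PCIe addresses (12340 at 0000:00:10.0, 12341 at 0000:00:11.0, 12342 at 0000:00:12.0, 12343 at 0000:00:13.0). When a check only needs a single identity field rather than the full dump, the text output can be filtered directly; a hedged one-liner sketch (not part of nvme.sh, and the awk patterns assume the field labels printed above):

    # Sketch: pull the Serial Number and Subsystem NQN for one controller.
    out=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r 'trtype:PCIe traddr:0000:00:10.0' -i 0)
    sn=$(awk '/Serial Number:/ {print $3; exit}' <<< "$out")
    nqn=$(awk '/Subsystem NQN:/ {print $3; exit}' <<< "$out")
    echo "sn=${sn} nqn=${nqn}"   # expected here: sn=12340 nqn=nqn.2019-08.org.qemu:12340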
00:08:53.083 Set Features (09h): Supported 00:08:53.083 Get Features (0Ah): Supported 00:08:53.083 Asynchronous Event Request (0Ch): Supported 00:08:53.083 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.083 Directive Send (19h): Supported 00:08:53.083 Directive Receive (1Ah): Supported 00:08:53.083 Virtualization Management (1Ch): Supported 00:08:53.083 Doorbell Buffer Config (7Ch): Supported 00:08:53.083 Format NVM (80h): Supported LBA-Change 00:08:53.083 I/O Commands 00:08:53.083 ------------ 00:08:53.083 Flush (00h): Supported LBA-Change 00:08:53.083 Write (01h): Supported LBA-Change 00:08:53.083 Read (02h): Supported 00:08:53.083 Compare (05h): Supported 00:08:53.083 Write Zeroes (08h): Supported LBA-Change 00:08:53.083 Dataset Management (09h): Supported LBA-Change 00:08:53.083 Unknown (0Ch): Supported 00:08:53.083 Unknown (12h): Supported 00:08:53.083 Copy (19h): Supported LBA-Change 00:08:53.083 Unknown (1Dh): Supported LBA-Change 00:08:53.083 00:08:53.083 Error Log 00:08:53.083 ========= 00:08:53.083 00:08:53.083 Arbitration 00:08:53.083 =========== 00:08:53.083 Arbitration Burst: no limit 00:08:53.083 00:08:53.083 Power Management 00:08:53.083 ================ 00:08:53.083 Number of Power States: 1 00:08:53.083 Current Power State: Power State #0 00:08:53.083 Power State #0: 00:08:53.083 Max Power: 25.00 W 00:08:53.083 Non-Operational State: Operational 00:08:53.083 Entry Latency: 16 microseconds 00:08:53.083 Exit Latency: 4 microseconds 00:08:53.083 Relative Read Throughput: 0 00:08:53.083 Relative Read Latency: 0 00:08:53.083 Relative Write Throughput: 0 00:08:53.083 Relative Write Latency: 0 00:08:53.083 Idle Power: Not Reported 00:08:53.083 Active Power: Not Reported 00:08:53.083 Non-Operational Permissive Mode: Not Supported 00:08:53.083 00:08:53.083 Health Information 00:08:53.083 ================== 00:08:53.083 Critical Warnings: 00:08:53.083 Available Spare Space: OK 00:08:53.083 Temperature: OK 00:08:53.083 Device Reliability: OK 00:08:53.083 Read Only: No 00:08:53.083 Volatile Memory Backup: OK 00:08:53.083 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.083 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.083 Available Spare: 0% 00:08:53.083 Available Spare Threshold: 0% 00:08:53.083 Life Percentage Used: 0% 00:08:53.083 Data Units Read: 704 00:08:53.083 Data Units Written: 632 00:08:53.083 Host Read Commands: 40922 00:08:53.083 Host Write Commands: 40708 00:08:53.083 Controller Busy Time: 0 minutes 00:08:53.083 Power Cycles: 0 00:08:53.083 Power On Hours: 0 hours 00:08:53.083 Unsafe Shutdowns: 0 00:08:53.083 Unrecoverable Media Errors: 0 00:08:53.083 Lifetime Error Log Entries: 0 00:08:53.083 Warning Temperature Time: 0 minutes 00:08:53.083 Critical Temperature Time: 0 minutes 00:08:53.083 00:08:53.083 Number of Queues 00:08:53.083 ================ 00:08:53.083 Number of I/O Submission Queues: 64 00:08:53.083 Number of I/O Completion Queues: 64 00:08:53.083 00:08:53.083 ZNS Specific Controller Data 00:08:53.083 ============================ 00:08:53.083 Zone Append Size Limit: 0 00:08:53.083 00:08:53.083 00:08:53.083 Active Namespaces 00:08:53.083 ================= 00:08:53.083 Namespace ID:1 00:08:53.083 Error Recovery Timeout: Unlimited 00:08:53.083 Command Set Identifier: NVM (00h) 00:08:53.083 Deallocate: Supported 00:08:53.083 Deallocated/Unwritten Error: Supported 00:08:53.083 Deallocated Read Value: All 0x00 00:08:53.083 Deallocate in Write Zeroes: Not Supported 00:08:53.083 Deallocated Guard Field: 0xFFFF 00:08:53.083 Flush: 
Supported 00:08:53.083 Reservation: Not Supported 00:08:53.083 Metadata Transferred as: Separate Metadata Buffer 00:08:53.083 Namespace Sharing Capabilities: Private 00:08:53.083 Size (in LBAs): 1548666 (5GiB) 00:08:53.083 Capacity (in LBAs): 1548666 (5GiB) 00:08:53.083 Utilization (in LBAs): 1548666 (5GiB) 00:08:53.083 Thin Provisioning: Not Supported 00:08:53.083 Per-NS Atomic Units: No 00:08:53.083 Maximum Single Source Range Length: 128 00:08:53.083 Maximum Copy Length: 128 00:08:53.083 Maximum Source Range Count: 128 00:08:53.083 NGUID/EUI64 Never Reused: No 00:08:53.083 Namespace Write Protected: No 00:08:53.083 Number of LBA Formats: 8 00:08:53.083 Current LBA Format: LBA Format #07 00:08:53.083 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.083 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.083 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.083 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.083 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.083 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.083 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.083 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.083 00:08:53.083 NVM Specific Namespace Data 00:08:53.083 =========================== 00:08:53.083 Logical Block Storage Tag Mask: 0 00:08:53.083 Protection Information Capabilities: 00:08:53.083 16b Guard Protection Information Storage Tag Support: No 00:08:53.083 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.083 Storage Tag Check Read Support: No 00:08:53.083 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.083 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.083 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:53.400 ===================================================== 00:08:53.401 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.401 ===================================================== 00:08:53.401 Controller Capabilities/Features 00:08:53.401 ================================ 00:08:53.401 Vendor ID: 1b36 00:08:53.401 Subsystem Vendor ID: 1af4 00:08:53.401 Serial Number: 12341 00:08:53.401 Model Number: QEMU NVMe Ctrl 00:08:53.401 Firmware Version: 8.0.0 00:08:53.401 Recommended Arb Burst: 6 00:08:53.401 IEEE OUI Identifier: 00 54 52 00:08:53.401 Multi-path I/O 00:08:53.401 May have multiple subsystem ports: No 00:08:53.401 May have multiple controllers: No 00:08:53.401 Associated with SR-IOV VF: No 00:08:53.401 Max Data Transfer Size: 524288 00:08:53.401 Max Number of Namespaces: 256 00:08:53.401 Max Number of I/O Queues: 64 00:08:53.401 NVMe 
Specification Version (VS): 1.4 00:08:53.401 NVMe Specification Version (Identify): 1.4 00:08:53.401 Maximum Queue Entries: 2048 00:08:53.401 Contiguous Queues Required: Yes 00:08:53.401 Arbitration Mechanisms Supported 00:08:53.401 Weighted Round Robin: Not Supported 00:08:53.401 Vendor Specific: Not Supported 00:08:53.401 Reset Timeout: 7500 ms 00:08:53.401 Doorbell Stride: 4 bytes 00:08:53.401 NVM Subsystem Reset: Not Supported 00:08:53.401 Command Sets Supported 00:08:53.401 NVM Command Set: Supported 00:08:53.401 Boot Partition: Not Supported 00:08:53.401 Memory Page Size Minimum: 4096 bytes 00:08:53.401 Memory Page Size Maximum: 65536 bytes 00:08:53.401 Persistent Memory Region: Not Supported 00:08:53.401 Optional Asynchronous Events Supported 00:08:53.401 Namespace Attribute Notices: Supported 00:08:53.401 Firmware Activation Notices: Not Supported 00:08:53.401 ANA Change Notices: Not Supported 00:08:53.401 PLE Aggregate Log Change Notices: Not Supported 00:08:53.401 LBA Status Info Alert Notices: Not Supported 00:08:53.401 EGE Aggregate Log Change Notices: Not Supported 00:08:53.401 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.401 Zone Descriptor Change Notices: Not Supported 00:08:53.401 Discovery Log Change Notices: Not Supported 00:08:53.401 Controller Attributes 00:08:53.401 128-bit Host Identifier: Not Supported 00:08:53.401 Non-Operational Permissive Mode: Not Supported 00:08:53.401 NVM Sets: Not Supported 00:08:53.401 Read Recovery Levels: Not Supported 00:08:53.401 Endurance Groups: Not Supported 00:08:53.401 Predictable Latency Mode: Not Supported 00:08:53.401 Traffic Based Keep ALive: Not Supported 00:08:53.401 Namespace Granularity: Not Supported 00:08:53.401 SQ Associations: Not Supported 00:08:53.401 UUID List: Not Supported 00:08:53.401 Multi-Domain Subsystem: Not Supported 00:08:53.401 Fixed Capacity Management: Not Supported 00:08:53.401 Variable Capacity Management: Not Supported 00:08:53.401 Delete Endurance Group: Not Supported 00:08:53.401 Delete NVM Set: Not Supported 00:08:53.401 Extended LBA Formats Supported: Supported 00:08:53.401 Flexible Data Placement Supported: Not Supported 00:08:53.401 00:08:53.401 Controller Memory Buffer Support 00:08:53.401 ================================ 00:08:53.401 Supported: No 00:08:53.401 00:08:53.401 Persistent Memory Region Support 00:08:53.401 ================================ 00:08:53.401 Supported: No 00:08:53.401 00:08:53.401 Admin Command Set Attributes 00:08:53.401 ============================ 00:08:53.401 Security Send/Receive: Not Supported 00:08:53.401 Format NVM: Supported 00:08:53.401 Firmware Activate/Download: Not Supported 00:08:53.401 Namespace Management: Supported 00:08:53.401 Device Self-Test: Not Supported 00:08:53.401 Directives: Supported 00:08:53.401 NVMe-MI: Not Supported 00:08:53.401 Virtualization Management: Not Supported 00:08:53.401 Doorbell Buffer Config: Supported 00:08:53.401 Get LBA Status Capability: Not Supported 00:08:53.401 Command & Feature Lockdown Capability: Not Supported 00:08:53.401 Abort Command Limit: 4 00:08:53.401 Async Event Request Limit: 4 00:08:53.401 Number of Firmware Slots: N/A 00:08:53.401 Firmware Slot 1 Read-Only: N/A 00:08:53.401 Firmware Activation Without Reset: N/A 00:08:53.401 Multiple Update Detection Support: N/A 00:08:53.401 Firmware Update Granularity: No Information Provided 00:08:53.401 Per-Namespace SMART Log: Yes 00:08:53.401 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.401 Subsystem NQN: nqn.2019-08.org.qemu:12341 
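Every namespace in these dumps advertises the same eight LBA formats (512 B or 4096 B data with 0, 8, 16, or 64 B of metadata) and names the active one: "Current LBA Format: LBA Format #04" on most controllers, #07 on the 12340 namespace with the separate metadata buffer. A small sketch that resolves the active format to its sizes from a saved dump; `dump.txt` is a hypothetical file holding one controller's identify output:

    # Sketch: map "Current LBA Format" back to its data/metadata sizes.
    cur=$(grep -m1 'Current LBA Format' dump.txt | grep -o '#[0-9]*')
    grep -m1 "LBA Format ${cur}: Data Size" dump.txt
    # e.g. prints: LBA Format #04: Data Size: 4096 Metadata Size: 0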
00:08:53.401 Command Effects Log Page: Supported 00:08:53.401 Get Log Page Extended Data: Supported 00:08:53.401 Telemetry Log Pages: Not Supported 00:08:53.401 Persistent Event Log Pages: Not Supported 00:08:53.401 Supported Log Pages Log Page: May Support 00:08:53.401 Commands Supported & Effects Log Page: Not Supported 00:08:53.401 Feature Identifiers & Effects Log Page:May Support 00:08:53.401 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.401 Data Area 4 for Telemetry Log: Not Supported 00:08:53.401 Error Log Page Entries Supported: 1 00:08:53.401 Keep Alive: Not Supported 00:08:53.401 00:08:53.401 NVM Command Set Attributes 00:08:53.401 ========================== 00:08:53.401 Submission Queue Entry Size 00:08:53.401 Max: 64 00:08:53.401 Min: 64 00:08:53.401 Completion Queue Entry Size 00:08:53.401 Max: 16 00:08:53.401 Min: 16 00:08:53.401 Number of Namespaces: 256 00:08:53.401 Compare Command: Supported 00:08:53.401 Write Uncorrectable Command: Not Supported 00:08:53.401 Dataset Management Command: Supported 00:08:53.401 Write Zeroes Command: Supported 00:08:53.401 Set Features Save Field: Supported 00:08:53.401 Reservations: Not Supported 00:08:53.401 Timestamp: Supported 00:08:53.401 Copy: Supported 00:08:53.401 Volatile Write Cache: Present 00:08:53.401 Atomic Write Unit (Normal): 1 00:08:53.401 Atomic Write Unit (PFail): 1 00:08:53.401 Atomic Compare & Write Unit: 1 00:08:53.401 Fused Compare & Write: Not Supported 00:08:53.401 Scatter-Gather List 00:08:53.401 SGL Command Set: Supported 00:08:53.401 SGL Keyed: Not Supported 00:08:53.401 SGL Bit Bucket Descriptor: Not Supported 00:08:53.401 SGL Metadata Pointer: Not Supported 00:08:53.401 Oversized SGL: Not Supported 00:08:53.401 SGL Metadata Address: Not Supported 00:08:53.401 SGL Offset: Not Supported 00:08:53.401 Transport SGL Data Block: Not Supported 00:08:53.401 Replay Protected Memory Block: Not Supported 00:08:53.401 00:08:53.401 Firmware Slot Information 00:08:53.401 ========================= 00:08:53.401 Active slot: 1 00:08:53.401 Slot 1 Firmware Revision: 1.0 00:08:53.401 00:08:53.401 00:08:53.401 Commands Supported and Effects 00:08:53.401 ============================== 00:08:53.401 Admin Commands 00:08:53.401 -------------- 00:08:53.401 Delete I/O Submission Queue (00h): Supported 00:08:53.401 Create I/O Submission Queue (01h): Supported 00:08:53.401 Get Log Page (02h): Supported 00:08:53.401 Delete I/O Completion Queue (04h): Supported 00:08:53.401 Create I/O Completion Queue (05h): Supported 00:08:53.401 Identify (06h): Supported 00:08:53.401 Abort (08h): Supported 00:08:53.401 Set Features (09h): Supported 00:08:53.401 Get Features (0Ah): Supported 00:08:53.401 Asynchronous Event Request (0Ch): Supported 00:08:53.401 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.401 Directive Send (19h): Supported 00:08:53.401 Directive Receive (1Ah): Supported 00:08:53.401 Virtualization Management (1Ch): Supported 00:08:53.401 Doorbell Buffer Config (7Ch): Supported 00:08:53.401 Format NVM (80h): Supported LBA-Change 00:08:53.401 I/O Commands 00:08:53.401 ------------ 00:08:53.401 Flush (00h): Supported LBA-Change 00:08:53.401 Write (01h): Supported LBA-Change 00:08:53.401 Read (02h): Supported 00:08:53.401 Compare (05h): Supported 00:08:53.401 Write Zeroes (08h): Supported LBA-Change 00:08:53.401 Dataset Management (09h): Supported LBA-Change 00:08:53.401 Unknown (0Ch): Supported 00:08:53.401 Unknown (12h): Supported 00:08:53.401 Copy (19h): Supported LBA-Change 00:08:53.401 Unknown (1Dh): 
Supported LBA-Change 00:08:53.401 00:08:53.401 Error Log 00:08:53.401 ========= 00:08:53.401 00:08:53.401 Arbitration 00:08:53.401 =========== 00:08:53.401 Arbitration Burst: no limit 00:08:53.401 00:08:53.401 Power Management 00:08:53.401 ================ 00:08:53.401 Number of Power States: 1 00:08:53.401 Current Power State: Power State #0 00:08:53.401 Power State #0: 00:08:53.401 Max Power: 25.00 W 00:08:53.401 Non-Operational State: Operational 00:08:53.401 Entry Latency: 16 microseconds 00:08:53.401 Exit Latency: 4 microseconds 00:08:53.401 Relative Read Throughput: 0 00:08:53.401 Relative Read Latency: 0 00:08:53.401 Relative Write Throughput: 0 00:08:53.401 Relative Write Latency: 0 00:08:53.401 Idle Power: Not Reported 00:08:53.401 Active Power: Not Reported 00:08:53.401 Non-Operational Permissive Mode: Not Supported 00:08:53.401 00:08:53.401 Health Information 00:08:53.401 ================== 00:08:53.401 Critical Warnings: 00:08:53.401 Available Spare Space: OK 00:08:53.402 Temperature: OK 00:08:53.402 Device Reliability: OK 00:08:53.402 Read Only: No 00:08:53.402 Volatile Memory Backup: OK 00:08:53.402 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.402 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.402 Available Spare: 0% 00:08:53.402 Available Spare Threshold: 0% 00:08:53.402 Life Percentage Used: 0% 00:08:53.402 Data Units Read: 1075 00:08:53.402 Data Units Written: 942 00:08:53.402 Host Read Commands: 59220 00:08:53.402 Host Write Commands: 58010 00:08:53.402 Controller Busy Time: 0 minutes 00:08:53.402 Power Cycles: 0 00:08:53.402 Power On Hours: 0 hours 00:08:53.402 Unsafe Shutdowns: 0 00:08:53.402 Unrecoverable Media Errors: 0 00:08:53.402 Lifetime Error Log Entries: 0 00:08:53.402 Warning Temperature Time: 0 minutes 00:08:53.402 Critical Temperature Time: 0 minutes 00:08:53.402 00:08:53.402 Number of Queues 00:08:53.402 ================ 00:08:53.402 Number of I/O Submission Queues: 64 00:08:53.402 Number of I/O Completion Queues: 64 00:08:53.402 00:08:53.402 ZNS Specific Controller Data 00:08:53.402 ============================ 00:08:53.402 Zone Append Size Limit: 0 00:08:53.402 00:08:53.402 00:08:53.402 Active Namespaces 00:08:53.402 ================= 00:08:53.402 Namespace ID:1 00:08:53.402 Error Recovery Timeout: Unlimited 00:08:53.402 Command Set Identifier: NVM (00h) 00:08:53.402 Deallocate: Supported 00:08:53.402 Deallocated/Unwritten Error: Supported 00:08:53.402 Deallocated Read Value: All 0x00 00:08:53.402 Deallocate in Write Zeroes: Not Supported 00:08:53.402 Deallocated Guard Field: 0xFFFF 00:08:53.402 Flush: Supported 00:08:53.402 Reservation: Not Supported 00:08:53.402 Namespace Sharing Capabilities: Private 00:08:53.402 Size (in LBAs): 1310720 (5GiB) 00:08:53.402 Capacity (in LBAs): 1310720 (5GiB) 00:08:53.402 Utilization (in LBAs): 1310720 (5GiB) 00:08:53.402 Thin Provisioning: Not Supported 00:08:53.402 Per-NS Atomic Units: No 00:08:53.402 Maximum Single Source Range Length: 128 00:08:53.402 Maximum Copy Length: 128 00:08:53.402 Maximum Source Range Count: 128 00:08:53.402 NGUID/EUI64 Never Reused: No 00:08:53.402 Namespace Write Protected: No 00:08:53.402 Number of LBA Formats: 8 00:08:53.402 Current LBA Format: LBA Format #04 00:08:53.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.402 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.402 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.402 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.402 LBA Format #04: Data Size: 4096 Metadata Size: 0 
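The reported capacities are consistent with the active 4096-byte format: 1310720 LBAs x 4096 B = 5 GiB for the 12341 namespace above, and 1048576 x 4096 = 4 GiB for each namespace on 12342. A quick sketch of that cross-check (a standalone snippet, not from the test suite):

    # Sketch: verify Size (in LBAs) against the reported GiB figure.
    lbas=1310720; block=4096
    echo "$(( lbas * block / 1024 / 1024 / 1024 ))GiB"   # prints 5GiB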
00:08:53.402 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.402 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.402 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.402 00:08:53.402 NVM Specific Namespace Data 00:08:53.402 =========================== 00:08:53.402 Logical Block Storage Tag Mask: 0 00:08:53.402 Protection Information Capabilities: 00:08:53.402 16b Guard Protection Information Storage Tag Support: No 00:08:53.402 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.402 Storage Tag Check Read Support: No 00:08:53.402 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.402 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.402 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:53.661 ===================================================== 00:08:53.661 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.661 ===================================================== 00:08:53.661 Controller Capabilities/Features 00:08:53.661 ================================ 00:08:53.661 Vendor ID: 1b36 00:08:53.661 Subsystem Vendor ID: 1af4 00:08:53.661 Serial Number: 12342 00:08:53.661 Model Number: QEMU NVMe Ctrl 00:08:53.661 Firmware Version: 8.0.0 00:08:53.661 Recommended Arb Burst: 6 00:08:53.661 IEEE OUI Identifier: 00 54 52 00:08:53.661 Multi-path I/O 00:08:53.661 May have multiple subsystem ports: No 00:08:53.661 May have multiple controllers: No 00:08:53.661 Associated with SR-IOV VF: No 00:08:53.661 Max Data Transfer Size: 524288 00:08:53.661 Max Number of Namespaces: 256 00:08:53.661 Max Number of I/O Queues: 64 00:08:53.661 NVMe Specification Version (VS): 1.4 00:08:53.661 NVMe Specification Version (Identify): 1.4 00:08:53.661 Maximum Queue Entries: 2048 00:08:53.661 Contiguous Queues Required: Yes 00:08:53.661 Arbitration Mechanisms Supported 00:08:53.661 Weighted Round Robin: Not Supported 00:08:53.661 Vendor Specific: Not Supported 00:08:53.661 Reset Timeout: 7500 ms 00:08:53.661 Doorbell Stride: 4 bytes 00:08:53.661 NVM Subsystem Reset: Not Supported 00:08:53.661 Command Sets Supported 00:08:53.661 NVM Command Set: Supported 00:08:53.661 Boot Partition: Not Supported 00:08:53.661 Memory Page Size Minimum: 4096 bytes 00:08:53.661 Memory Page Size Maximum: 65536 bytes 00:08:53.661 Persistent Memory Region: Not Supported 00:08:53.661 Optional Asynchronous Events Supported 00:08:53.661 Namespace Attribute Notices: Supported 00:08:53.661 Firmware Activation Notices: Not Supported 00:08:53.661 ANA Change Notices: Not Supported 00:08:53.661 PLE Aggregate Log Change Notices: Not Supported 00:08:53.661 LBA Status Info Alert Notices: 
Not Supported 00:08:53.661 EGE Aggregate Log Change Notices: Not Supported 00:08:53.661 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.661 Zone Descriptor Change Notices: Not Supported 00:08:53.661 Discovery Log Change Notices: Not Supported 00:08:53.661 Controller Attributes 00:08:53.661 128-bit Host Identifier: Not Supported 00:08:53.661 Non-Operational Permissive Mode: Not Supported 00:08:53.661 NVM Sets: Not Supported 00:08:53.661 Read Recovery Levels: Not Supported 00:08:53.661 Endurance Groups: Not Supported 00:08:53.661 Predictable Latency Mode: Not Supported 00:08:53.661 Traffic Based Keep ALive: Not Supported 00:08:53.661 Namespace Granularity: Not Supported 00:08:53.661 SQ Associations: Not Supported 00:08:53.661 UUID List: Not Supported 00:08:53.661 Multi-Domain Subsystem: Not Supported 00:08:53.661 Fixed Capacity Management: Not Supported 00:08:53.661 Variable Capacity Management: Not Supported 00:08:53.661 Delete Endurance Group: Not Supported 00:08:53.661 Delete NVM Set: Not Supported 00:08:53.661 Extended LBA Formats Supported: Supported 00:08:53.661 Flexible Data Placement Supported: Not Supported 00:08:53.661 00:08:53.661 Controller Memory Buffer Support 00:08:53.661 ================================ 00:08:53.661 Supported: No 00:08:53.661 00:08:53.661 Persistent Memory Region Support 00:08:53.661 ================================ 00:08:53.661 Supported: No 00:08:53.661 00:08:53.661 Admin Command Set Attributes 00:08:53.661 ============================ 00:08:53.661 Security Send/Receive: Not Supported 00:08:53.661 Format NVM: Supported 00:08:53.661 Firmware Activate/Download: Not Supported 00:08:53.661 Namespace Management: Supported 00:08:53.661 Device Self-Test: Not Supported 00:08:53.661 Directives: Supported 00:08:53.661 NVMe-MI: Not Supported 00:08:53.661 Virtualization Management: Not Supported 00:08:53.661 Doorbell Buffer Config: Supported 00:08:53.661 Get LBA Status Capability: Not Supported 00:08:53.661 Command & Feature Lockdown Capability: Not Supported 00:08:53.661 Abort Command Limit: 4 00:08:53.661 Async Event Request Limit: 4 00:08:53.661 Number of Firmware Slots: N/A 00:08:53.661 Firmware Slot 1 Read-Only: N/A 00:08:53.661 Firmware Activation Without Reset: N/A 00:08:53.661 Multiple Update Detection Support: N/A 00:08:53.661 Firmware Update Granularity: No Information Provided 00:08:53.661 Per-Namespace SMART Log: Yes 00:08:53.661 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.661 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:53.661 Command Effects Log Page: Supported 00:08:53.661 Get Log Page Extended Data: Supported 00:08:53.661 Telemetry Log Pages: Not Supported 00:08:53.661 Persistent Event Log Pages: Not Supported 00:08:53.661 Supported Log Pages Log Page: May Support 00:08:53.661 Commands Supported & Effects Log Page: Not Supported 00:08:53.661 Feature Identifiers & Effects Log Page:May Support 00:08:53.661 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.661 Data Area 4 for Telemetry Log: Not Supported 00:08:53.661 Error Log Page Entries Supported: 1 00:08:53.661 Keep Alive: Not Supported 00:08:53.661 00:08:53.661 NVM Command Set Attributes 00:08:53.661 ========================== 00:08:53.661 Submission Queue Entry Size 00:08:53.661 Max: 64 00:08:53.661 Min: 64 00:08:53.661 Completion Queue Entry Size 00:08:53.661 Max: 16 00:08:53.661 Min: 16 00:08:53.661 Number of Namespaces: 256 00:08:53.661 Compare Command: Supported 00:08:53.661 Write Uncorrectable Command: Not Supported 00:08:53.661 Dataset Management Command: 
Supported 00:08:53.661 Write Zeroes Command: Supported 00:08:53.661 Set Features Save Field: Supported 00:08:53.661 Reservations: Not Supported 00:08:53.661 Timestamp: Supported 00:08:53.661 Copy: Supported 00:08:53.661 Volatile Write Cache: Present 00:08:53.661 Atomic Write Unit (Normal): 1 00:08:53.661 Atomic Write Unit (PFail): 1 00:08:53.661 Atomic Compare & Write Unit: 1 00:08:53.661 Fused Compare & Write: Not Supported 00:08:53.662 Scatter-Gather List 00:08:53.662 SGL Command Set: Supported 00:08:53.662 SGL Keyed: Not Supported 00:08:53.662 SGL Bit Bucket Descriptor: Not Supported 00:08:53.662 SGL Metadata Pointer: Not Supported 00:08:53.662 Oversized SGL: Not Supported 00:08:53.662 SGL Metadata Address: Not Supported 00:08:53.662 SGL Offset: Not Supported 00:08:53.662 Transport SGL Data Block: Not Supported 00:08:53.662 Replay Protected Memory Block: Not Supported 00:08:53.662 00:08:53.662 Firmware Slot Information 00:08:53.662 ========================= 00:08:53.662 Active slot: 1 00:08:53.662 Slot 1 Firmware Revision: 1.0 00:08:53.662 00:08:53.662 00:08:53.662 Commands Supported and Effects 00:08:53.662 ============================== 00:08:53.662 Admin Commands 00:08:53.662 -------------- 00:08:53.662 Delete I/O Submission Queue (00h): Supported 00:08:53.662 Create I/O Submission Queue (01h): Supported 00:08:53.662 Get Log Page (02h): Supported 00:08:53.662 Delete I/O Completion Queue (04h): Supported 00:08:53.662 Create I/O Completion Queue (05h): Supported 00:08:53.662 Identify (06h): Supported 00:08:53.662 Abort (08h): Supported 00:08:53.662 Set Features (09h): Supported 00:08:53.662 Get Features (0Ah): Supported 00:08:53.662 Asynchronous Event Request (0Ch): Supported 00:08:53.662 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.662 Directive Send (19h): Supported 00:08:53.662 Directive Receive (1Ah): Supported 00:08:53.662 Virtualization Management (1Ch): Supported 00:08:53.662 Doorbell Buffer Config (7Ch): Supported 00:08:53.662 Format NVM (80h): Supported LBA-Change 00:08:53.662 I/O Commands 00:08:53.662 ------------ 00:08:53.662 Flush (00h): Supported LBA-Change 00:08:53.662 Write (01h): Supported LBA-Change 00:08:53.662 Read (02h): Supported 00:08:53.662 Compare (05h): Supported 00:08:53.662 Write Zeroes (08h): Supported LBA-Change 00:08:53.662 Dataset Management (09h): Supported LBA-Change 00:08:53.662 Unknown (0Ch): Supported 00:08:53.662 Unknown (12h): Supported 00:08:53.662 Copy (19h): Supported LBA-Change 00:08:53.662 Unknown (1Dh): Supported LBA-Change 00:08:53.662 00:08:53.662 Error Log 00:08:53.662 ========= 00:08:53.662 00:08:53.662 Arbitration 00:08:53.662 =========== 00:08:53.662 Arbitration Burst: no limit 00:08:53.662 00:08:53.662 Power Management 00:08:53.662 ================ 00:08:53.662 Number of Power States: 1 00:08:53.662 Current Power State: Power State #0 00:08:53.662 Power State #0: 00:08:53.662 Max Power: 25.00 W 00:08:53.662 Non-Operational State: Operational 00:08:53.662 Entry Latency: 16 microseconds 00:08:53.662 Exit Latency: 4 microseconds 00:08:53.662 Relative Read Throughput: 0 00:08:53.662 Relative Read Latency: 0 00:08:53.662 Relative Write Throughput: 0 00:08:53.662 Relative Write Latency: 0 00:08:53.662 Idle Power: Not Reported 00:08:53.662 Active Power: Not Reported 00:08:53.662 Non-Operational Permissive Mode: Not Supported 00:08:53.662 00:08:53.662 Health Information 00:08:53.662 ================== 00:08:53.662 Critical Warnings: 00:08:53.662 Available Spare Space: OK 00:08:53.662 Temperature: OK 00:08:53.662 Device 
Reliability: OK 00:08:53.662 Read Only: No 00:08:53.662 Volatile Memory Backup: OK 00:08:53.662 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.662 Available Spare: 0% 00:08:53.662 Available Spare Threshold: 0% 00:08:53.662 Life Percentage Used: 0% 00:08:53.662 Data Units Read: 2444 00:08:53.662 Data Units Written: 2231 00:08:53.662 Host Read Commands: 126190 00:08:53.662 Host Write Commands: 124459 00:08:53.662 Controller Busy Time: 0 minutes 00:08:53.662 Power Cycles: 0 00:08:53.662 Power On Hours: 0 hours 00:08:53.662 Unsafe Shutdowns: 0 00:08:53.662 Unrecoverable Media Errors: 0 00:08:53.662 Lifetime Error Log Entries: 0 00:08:53.662 Warning Temperature Time: 0 minutes 00:08:53.662 Critical Temperature Time: 0 minutes 00:08:53.662 00:08:53.662 Number of Queues 00:08:53.662 ================ 00:08:53.662 Number of I/O Submission Queues: 64 00:08:53.662 Number of I/O Completion Queues: 64 00:08:53.662 00:08:53.662 ZNS Specific Controller Data 00:08:53.662 ============================ 00:08:53.662 Zone Append Size Limit: 0 00:08:53.662 00:08:53.662 00:08:53.662 Active Namespaces 00:08:53.662 ================= 00:08:53.662 Namespace ID:1 00:08:53.662 Error Recovery Timeout: Unlimited 00:08:53.662 Command Set Identifier: NVM (00h) 00:08:53.662 Deallocate: Supported 00:08:53.662 Deallocated/Unwritten Error: Supported 00:08:53.662 Deallocated Read Value: All 0x00 00:08:53.662 Deallocate in Write Zeroes: Not Supported 00:08:53.662 Deallocated Guard Field: 0xFFFF 00:08:53.662 Flush: Supported 00:08:53.662 Reservation: Not Supported 00:08:53.662 Namespace Sharing Capabilities: Private 00:08:53.662 Size (in LBAs): 1048576 (4GiB) 00:08:53.662 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.662 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.662 Thin Provisioning: Not Supported 00:08:53.662 Per-NS Atomic Units: No 00:08:53.662 Maximum Single Source Range Length: 128 00:08:53.662 Maximum Copy Length: 128 00:08:53.662 Maximum Source Range Count: 128 00:08:53.662 NGUID/EUI64 Never Reused: No 00:08:53.662 Namespace Write Protected: No 00:08:53.662 Number of LBA Formats: 8 00:08:53.662 Current LBA Format: LBA Format #04 00:08:53.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.662 00:08:53.662 NVM Specific Namespace Data 00:08:53.662 =========================== 00:08:53.662 Logical Block Storage Tag Mask: 0 00:08:53.662 Protection Information Capabilities: 00:08:53.662 16b Guard Protection Information Storage Tag Support: No 00:08:53.662 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.662 Storage Tag Check Read Support: No 00:08:53.662 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Namespace ID:2 00:08:53.662 Error Recovery Timeout: Unlimited 00:08:53.662 Command Set Identifier: NVM (00h) 00:08:53.662 Deallocate: Supported 00:08:53.662 Deallocated/Unwritten Error: Supported 00:08:53.662 Deallocated Read Value: All 0x00 00:08:53.662 Deallocate in Write Zeroes: Not Supported 00:08:53.662 Deallocated Guard Field: 0xFFFF 00:08:53.662 Flush: Supported 00:08:53.662 Reservation: Not Supported 00:08:53.662 Namespace Sharing Capabilities: Private 00:08:53.662 Size (in LBAs): 1048576 (4GiB) 00:08:53.662 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.662 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.662 Thin Provisioning: Not Supported 00:08:53.662 Per-NS Atomic Units: No 00:08:53.662 Maximum Single Source Range Length: 128 00:08:53.662 Maximum Copy Length: 128 00:08:53.662 Maximum Source Range Count: 128 00:08:53.662 NGUID/EUI64 Never Reused: No 00:08:53.662 Namespace Write Protected: No 00:08:53.662 Number of LBA Formats: 8 00:08:53.662 Current LBA Format: LBA Format #04 00:08:53.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.662 00:08:53.662 NVM Specific Namespace Data 00:08:53.662 =========================== 00:08:53.662 Logical Block Storage Tag Mask: 0 00:08:53.662 Protection Information Capabilities: 00:08:53.662 16b Guard Protection Information Storage Tag Support: No 00:08:53.662 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.662 Storage Tag Check Read Support: No 00:08:53.662 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.662 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Namespace ID:3 00:08:53.663 Error Recovery Timeout: Unlimited 00:08:53.663 Command Set Identifier: NVM (00h) 00:08:53.663 Deallocate: Supported 00:08:53.663 Deallocated/Unwritten Error: Supported 00:08:53.663 Deallocated Read Value: All 0x00 00:08:53.663 Deallocate in Write Zeroes: Not Supported 00:08:53.663 Deallocated Guard Field: 0xFFFF 00:08:53.663 Flush: Supported 00:08:53.663 Reservation: Not Supported 00:08:53.663 
Namespace Sharing Capabilities: Private 00:08:53.663 Size (in LBAs): 1048576 (4GiB) 00:08:53.663 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.663 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.663 Thin Provisioning: Not Supported 00:08:53.663 Per-NS Atomic Units: No 00:08:53.663 Maximum Single Source Range Length: 128 00:08:53.663 Maximum Copy Length: 128 00:08:53.663 Maximum Source Range Count: 128 00:08:53.663 NGUID/EUI64 Never Reused: No 00:08:53.663 Namespace Write Protected: No 00:08:53.663 Number of LBA Formats: 8 00:08:53.663 Current LBA Format: LBA Format #04 00:08:53.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.663 00:08:53.663 NVM Specific Namespace Data 00:08:53.663 =========================== 00:08:53.663 Logical Block Storage Tag Mask: 0 00:08:53.663 Protection Information Capabilities: 00:08:53.663 16b Guard Protection Information Storage Tag Support: No 00:08:53.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.663 Storage Tag Check Read Support: No 00:08:53.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.663 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.663 14:41:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:53.922 ===================================================== 00:08:53.922 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.922 ===================================================== 00:08:53.922 Controller Capabilities/Features 00:08:53.922 ================================ 00:08:53.922 Vendor ID: 1b36 00:08:53.922 Subsystem Vendor ID: 1af4 00:08:53.922 Serial Number: 12343 00:08:53.922 Model Number: QEMU NVMe Ctrl 00:08:53.922 Firmware Version: 8.0.0 00:08:53.922 Recommended Arb Burst: 6 00:08:53.922 IEEE OUI Identifier: 00 54 52 00:08:53.922 Multi-path I/O 00:08:53.922 May have multiple subsystem ports: No 00:08:53.922 May have multiple controllers: Yes 00:08:53.922 Associated with SR-IOV VF: No 00:08:53.922 Max Data Transfer Size: 524288 00:08:53.922 Max Number of Namespaces: 256 00:08:53.922 Max Number of I/O Queues: 64 00:08:53.922 NVMe Specification Version (VS): 1.4 00:08:53.922 NVMe Specification Version (Identify): 1.4 00:08:53.922 Maximum Queue Entries: 2048 
00:08:53.922 Contiguous Queues Required: Yes 00:08:53.922 Arbitration Mechanisms Supported 00:08:53.922 Weighted Round Robin: Not Supported 00:08:53.922 Vendor Specific: Not Supported 00:08:53.922 Reset Timeout: 7500 ms 00:08:53.922 Doorbell Stride: 4 bytes 00:08:53.922 NVM Subsystem Reset: Not Supported 00:08:53.922 Command Sets Supported 00:08:53.922 NVM Command Set: Supported 00:08:53.922 Boot Partition: Not Supported 00:08:53.922 Memory Page Size Minimum: 4096 bytes 00:08:53.922 Memory Page Size Maximum: 65536 bytes 00:08:53.922 Persistent Memory Region: Not Supported 00:08:53.922 Optional Asynchronous Events Supported 00:08:53.922 Namespace Attribute Notices: Supported 00:08:53.922 Firmware Activation Notices: Not Supported 00:08:53.922 ANA Change Notices: Not Supported 00:08:53.922 PLE Aggregate Log Change Notices: Not Supported 00:08:53.922 LBA Status Info Alert Notices: Not Supported 00:08:53.922 EGE Aggregate Log Change Notices: Not Supported 00:08:53.922 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.922 Zone Descriptor Change Notices: Not Supported 00:08:53.922 Discovery Log Change Notices: Not Supported 00:08:53.922 Controller Attributes 00:08:53.922 128-bit Host Identifier: Not Supported 00:08:53.922 Non-Operational Permissive Mode: Not Supported 00:08:53.922 NVM Sets: Not Supported 00:08:53.922 Read Recovery Levels: Not Supported 00:08:53.922 Endurance Groups: Supported 00:08:53.922 Predictable Latency Mode: Not Supported 00:08:53.922 Traffic Based Keep Alive: Not Supported 00:08:53.922 Namespace Granularity: Not Supported 00:08:53.922 SQ Associations: Not Supported 00:08:53.922 UUID List: Not Supported 00:08:53.922 Multi-Domain Subsystem: Not Supported 00:08:53.922 Fixed Capacity Management: Not Supported 00:08:53.922 Variable Capacity Management: Not Supported 00:08:53.922 Delete Endurance Group: Not Supported 00:08:53.922 Delete NVM Set: Not Supported 00:08:53.922 Extended LBA Formats Supported: Supported 00:08:53.922 Flexible Data Placement Supported: Supported 00:08:53.922 00:08:53.922 Controller Memory Buffer Support 00:08:53.922 ================================ 00:08:53.922 Supported: No 00:08:53.922 00:08:53.922 Persistent Memory Region Support 00:08:53.922 ================================ 00:08:53.922 Supported: No 00:08:53.922 00:08:53.922 Admin Command Set Attributes 00:08:53.922 ============================ 00:08:53.922 Security Send/Receive: Not Supported 00:08:53.922 Format NVM: Supported 00:08:53.922 Firmware Activate/Download: Not Supported 00:08:53.922 Namespace Management: Supported 00:08:53.922 Device Self-Test: Not Supported 00:08:53.922 Directives: Supported 00:08:53.922 NVMe-MI: Not Supported 00:08:53.922 Virtualization Management: Not Supported 00:08:53.922 Doorbell Buffer Config: Supported 00:08:53.922 Get LBA Status Capability: Not Supported 00:08:53.922 Command & Feature Lockdown Capability: Not Supported 00:08:53.922 Abort Command Limit: 4 00:08:53.922 Async Event Request Limit: 4 00:08:53.922 Number of Firmware Slots: N/A 00:08:53.922 Firmware Slot 1 Read-Only: N/A 00:08:53.922 Firmware Activation Without Reset: N/A 00:08:53.922 Multiple Update Detection Support: N/A 00:08:53.922 Firmware Update Granularity: No Information Provided 00:08:53.922 Per-Namespace SMART Log: Yes 00:08:53.922 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.922 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:53.922 Command Effects Log Page: Supported 00:08:53.922 Get Log Page Extended Data: Supported 00:08:53.922 Telemetry Log Pages: Not
Supported 00:08:53.922 Persistent Event Log Pages: Not Supported 00:08:53.922 Supported Log Pages Log Page: May Support 00:08:53.922 Commands Supported & Effects Log Page: Not Supported 00:08:53.922 Feature Identifiers & Effects Log Page: May Support 00:08:53.922 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.922 Data Area 4 for Telemetry Log: Not Supported 00:08:53.922 Error Log Page Entries Supported: 1 00:08:53.922 Keep Alive: Not Supported 00:08:53.922 00:08:53.922 NVM Command Set Attributes 00:08:53.922 ========================== 00:08:53.922 Submission Queue Entry Size 00:08:53.922 Max: 64 00:08:53.922 Min: 64 00:08:53.922 Completion Queue Entry Size 00:08:53.922 Max: 16 00:08:53.922 Min: 16 00:08:53.922 Number of Namespaces: 256 00:08:53.922 Compare Command: Supported 00:08:53.922 Write Uncorrectable Command: Not Supported 00:08:53.922 Dataset Management Command: Supported 00:08:53.922 Write Zeroes Command: Supported 00:08:53.922 Set Features Save Field: Supported 00:08:53.922 Reservations: Not Supported 00:08:53.922 Timestamp: Supported 00:08:53.922 Copy: Supported 00:08:53.922 Volatile Write Cache: Present 00:08:53.922 Atomic Write Unit (Normal): 1 00:08:53.922 Atomic Write Unit (PFail): 1 00:08:53.922 Atomic Compare & Write Unit: 1 00:08:53.922 Fused Compare & Write: Not Supported 00:08:53.922 Scatter-Gather List 00:08:53.922 SGL Command Set: Supported 00:08:53.922 SGL Keyed: Not Supported 00:08:53.922 SGL Bit Bucket Descriptor: Not Supported 00:08:53.922 SGL Metadata Pointer: Not Supported 00:08:53.922 Oversized SGL: Not Supported 00:08:53.922 SGL Metadata Address: Not Supported 00:08:53.922 SGL Offset: Not Supported 00:08:53.922 Transport SGL Data Block: Not Supported 00:08:53.922 Replay Protected Memory Block: Not Supported 00:08:53.922 00:08:53.922 Firmware Slot Information 00:08:53.922 ========================= 00:08:53.923 Active slot: 1 00:08:53.923 Slot 1 Firmware Revision: 1.0 00:08:53.923 00:08:53.923 00:08:53.923 Commands Supported and Effects 00:08:53.923 ============================== 00:08:53.923 Admin Commands 00:08:53.923 -------------- 00:08:53.923 Delete I/O Submission Queue (00h): Supported 00:08:53.923 Create I/O Submission Queue (01h): Supported 00:08:53.923 Get Log Page (02h): Supported 00:08:53.923 Delete I/O Completion Queue (04h): Supported 00:08:53.923 Create I/O Completion Queue (05h): Supported 00:08:53.923 Identify (06h): Supported 00:08:53.923 Abort (08h): Supported 00:08:53.923 Set Features (09h): Supported 00:08:53.923 Get Features (0Ah): Supported 00:08:53.923 Asynchronous Event Request (0Ch): Supported 00:08:53.923 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.923 Directive Send (19h): Supported 00:08:53.923 Directive Receive (1Ah): Supported 00:08:53.923 Virtualization Management (1Ch): Supported 00:08:53.923 Doorbell Buffer Config (7Ch): Supported 00:08:53.923 Format NVM (80h): Supported LBA-Change 00:08:53.923 I/O Commands 00:08:53.923 ------------ 00:08:53.923 Flush (00h): Supported LBA-Change 00:08:53.923 Write (01h): Supported LBA-Change 00:08:53.923 Read (02h): Supported 00:08:53.923 Compare (05h): Supported 00:08:53.923 Write Zeroes (08h): Supported LBA-Change 00:08:53.923 Dataset Management (09h): Supported LBA-Change 00:08:53.923 Unknown (0Ch): Supported 00:08:53.923 Unknown (12h): Supported 00:08:53.923 Copy (19h): Supported LBA-Change 00:08:53.923 Unknown (1Dh): Supported LBA-Change 00:08:53.923 00:08:53.923 Error Log 00:08:53.923 ========= 00:08:53.923 00:08:53.923 Arbitration 00:08:53.923 ===========
00:08:53.923 Arbitration Burst: no limit 00:08:53.923 00:08:53.923 Power Management 00:08:53.923 ================ 00:08:53.923 Number of Power States: 1 00:08:53.923 Current Power State: Power State #0 00:08:53.923 Power State #0: 00:08:53.923 Max Power: 25.00 W 00:08:53.923 Non-Operational State: Operational 00:08:53.923 Entry Latency: 16 microseconds 00:08:53.923 Exit Latency: 4 microseconds 00:08:53.923 Relative Read Throughput: 0 00:08:53.923 Relative Read Latency: 0 00:08:53.923 Relative Write Throughput: 0 00:08:53.923 Relative Write Latency: 0 00:08:53.923 Idle Power: Not Reported 00:08:53.923 Active Power: Not Reported 00:08:53.923 Non-Operational Permissive Mode: Not Supported 00:08:53.923 00:08:53.923 Health Information 00:08:53.923 ================== 00:08:53.923 Critical Warnings: 00:08:53.923 Available Spare Space: OK 00:08:53.923 Temperature: OK 00:08:53.923 Device Reliability: OK 00:08:53.923 Read Only: No 00:08:53.923 Volatile Memory Backup: OK 00:08:53.923 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.923 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.923 Available Spare: 0% 00:08:53.923 Available Spare Threshold: 0% 00:08:53.923 Life Percentage Used: 0% 00:08:53.923 Data Units Read: 1151 00:08:53.923 Data Units Written: 1080 00:08:53.923 Host Read Commands: 44773 00:08:53.923 Host Write Commands: 44196 00:08:53.923 Controller Busy Time: 0 minutes 00:08:53.923 Power Cycles: 0 00:08:53.923 Power On Hours: 0 hours 00:08:53.923 Unsafe Shutdowns: 0 00:08:53.923 Unrecoverable Media Errors: 0 00:08:53.923 Lifetime Error Log Entries: 0 00:08:53.923 Warning Temperature Time: 0 minutes 00:08:53.923 Critical Temperature Time: 0 minutes 00:08:53.923 00:08:53.923 Number of Queues 00:08:53.923 ================ 00:08:53.923 Number of I/O Submission Queues: 64 00:08:53.923 Number of I/O Completion Queues: 64 00:08:53.923 00:08:53.923 ZNS Specific Controller Data 00:08:53.923 ============================ 00:08:53.923 Zone Append Size Limit: 0 00:08:53.923 00:08:53.923 00:08:53.923 Active Namespaces 00:08:53.923 ================= 00:08:53.923 Namespace ID:1 00:08:53.923 Error Recovery Timeout: Unlimited 00:08:53.923 Command Set Identifier: NVM (00h) 00:08:53.923 Deallocate: Supported 00:08:53.923 Deallocated/Unwritten Error: Supported 00:08:53.923 Deallocated Read Value: All 0x00 00:08:53.923 Deallocate in Write Zeroes: Not Supported 00:08:53.923 Deallocated Guard Field: 0xFFFF 00:08:53.923 Flush: Supported 00:08:53.923 Reservation: Not Supported 00:08:53.923 Namespace Sharing Capabilities: Multiple Controllers 00:08:53.923 Size (in LBAs): 262144 (1GiB) 00:08:53.923 Capacity (in LBAs): 262144 (1GiB) 00:08:53.923 Utilization (in LBAs): 262144 (1GiB) 00:08:53.923 Thin Provisioning: Not Supported 00:08:53.923 Per-NS Atomic Units: No 00:08:53.923 Maximum Single Source Range Length: 128 00:08:53.923 Maximum Copy Length: 128 00:08:53.923 Maximum Source Range Count: 128 00:08:53.923 NGUID/EUI64 Never Reused: No 00:08:53.923 Namespace Write Protected: No 00:08:53.923 Endurance group ID: 1 00:08:53.923 Number of LBA Formats: 8 00:08:53.923 Current LBA Format: LBA Format #04 00:08:53.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.923 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.923 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.923 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.923 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.923 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.923 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:53.923 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.923 00:08:53.923 Get Feature FDP: 00:08:53.923 ================ 00:08:53.923 Enabled: Yes 00:08:53.923 FDP configuration index: 0 00:08:53.923 00:08:53.923 FDP configurations log page 00:08:53.923 =========================== 00:08:53.923 Number of FDP configurations: 1 00:08:53.923 Version: 0 00:08:53.923 Size: 112 00:08:53.923 FDP Configuration Descriptor: 0 00:08:53.923 Descriptor Size: 96 00:08:53.923 Reclaim Group Identifier format: 2 00:08:53.923 FDP Volatile Write Cache: Not Present 00:08:53.923 FDP Configuration: Valid 00:08:53.923 Vendor Specific Size: 0 00:08:53.923 Number of Reclaim Groups: 2 00:08:53.923 Number of Reclaim Unit Handles: 8 00:08:53.923 Max Placement Identifiers: 128 00:08:53.923 Number of Namespaces Supported: 256 00:08:53.923 Reclaim Unit Nominal Size: 6000000 bytes 00:08:53.923 Estimated Reclaim Unit Time Limit: Not Reported 00:08:53.923 RUH Desc #000: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #001: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #002: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #003: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #004: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #005: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #006: RUH Type: Initially Isolated 00:08:53.923 RUH Desc #007: RUH Type: Initially Isolated 00:08:53.923 00:08:53.923 FDP reclaim unit handle usage log page 00:08:53.923 ====================================== 00:08:53.923 Number of Reclaim Unit Handles: 8 00:08:53.923 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:53.923 RUH Usage Desc #001: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #002: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #003: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #004: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #005: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #006: RUH Attributes: Unused 00:08:53.923 RUH Usage Desc #007: RUH Attributes: Unused 00:08:53.923 00:08:53.923 FDP statistics log page 00:08:53.923 ======================= 00:08:53.923 Host bytes with metadata written: 661233664 00:08:53.923 Media bytes with metadata written: 661315584 00:08:53.923 Media bytes erased: 0 00:08:53.923 00:08:53.923 FDP events log page 00:08:53.923 =================== 00:08:53.923 Number of FDP events: 0 00:08:53.923 00:08:53.923 NVM Specific Namespace Data 00:08:53.923 =========================== 00:08:53.923 Logical Block Storage Tag Mask: 0 00:08:53.923 Protection Information Capabilities: 00:08:53.923 16b Guard Protection Information Storage Tag Support: No 00:08:53.923 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.923 Storage Tag Check Read Support: No 00:08:53.923 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.923 00:08:53.923 real 0m1.246s 00:08:53.923 user 0m0.461s 00:08:53.923 sys 0m0.549s 00:08:53.923 14:41:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.923 14:41:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:53.923 ************************************ 00:08:53.924 END TEST nvme_identify 00:08:53.924 ************************************ 00:08:53.924 14:41:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:53.924 14:41:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.924 14:41:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.924 14:41:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.924 ************************************ 00:08:53.924 START TEST nvme_perf 00:08:53.924 ************************************ 00:08:53.924 14:41:31 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:53.924 14:41:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:55.305 Initializing NVMe Controllers 00:08:55.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.305 Initialization complete. Launching workers. 
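The identify pass that just finished and the perf run launched above both call SPDK's prebuilt tools directly, so they can be reproduced by hand outside the harness. A minimal standalone sketch: the binary path, PCI address, and flag values are taken verbatim from this log; SPDK_BIN is just a local shorthand introduced here; the comments on -q/-w/-o/-t reflect spdk_nvme_perf's standard queue-depth, workload, I/O-size, and run-time options, while -LL, -i 0, and -N are carried over exactly as used in this run.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
# Dump controller, namespace, and FDP data for one PCIe controller (as in nvme.sh@16):
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
# 1-second, queue-depth-128, 12288-byte read run with latency tracking (nvme.sh@22):
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
As a sanity check on the identify output above, the namespace sizes are self-consistent with the current LBA Format #04 (4096-byte data size):
echo $(( 1048576 * 4096 ))   # 4294967296 bytes = the 4GiB reported for the 1048576-LBA namespaces
echo $((  262144 * 4096 ))   # 1073741824 bytes = the 1GiB reported for the 262144-LBA namespace on 0000:00:13.0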
00:08:55.305 ======================================================== 00:08:55.305 Latency(us) 00:08:55.305 Device Information : IOPS MiB/s Average min max 00:08:55.305 PCIE (0000:00:10.0) NSID 1 from core 0: 18044.81 211.46 7102.63 6008.88 37387.12 00:08:55.305 PCIE (0000:00:11.0) NSID 1 from core 0: 18044.81 211.46 7091.96 6105.57 35489.27 00:08:55.305 PCIE (0000:00:13.0) NSID 1 from core 0: 18044.81 211.46 7080.09 6102.38 34013.51 00:08:55.305 PCIE (0000:00:12.0) NSID 1 from core 0: 18044.81 211.46 7068.04 6088.36 32083.10 00:08:55.305 PCIE (0000:00:12.0) NSID 2 from core 0: 18044.81 211.46 7056.00 6110.58 30151.42 00:08:55.305 PCIE (0000:00:12.0) NSID 3 from core 0: 18108.79 212.21 7019.06 6088.43 24676.82 00:08:55.305 ======================================================== 00:08:55.305 Total : 108332.83 1269.53 7069.60 6008.88 37387.12 00:08:55.305 00:08:55.305 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.305 ================================================================================= 00:08:55.305 1.00000% : 6125.095us 00:08:55.305 10.00000% : 6301.538us 00:08:55.305 25.00000% : 6503.188us 00:08:55.305 50.00000% : 6805.662us 00:08:55.305 75.00000% : 7108.135us 00:08:55.305 90.00000% : 7360.197us 00:08:55.305 95.00000% : 7813.908us 00:08:55.305 98.00000% : 10788.234us 00:08:55.305 99.00000% : 14317.095us 00:08:55.306 99.50000% : 32062.228us 00:08:55.306 99.90000% : 36901.809us 00:08:55.306 99.99000% : 37506.757us 00:08:55.306 99.99900% : 37506.757us 00:08:55.306 99.99990% : 37506.757us 00:08:55.306 99.99999% : 37506.757us 00:08:55.306 00:08:55.306 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.306 ================================================================================= 00:08:55.306 1.00000% : 6225.920us 00:08:55.306 10.00000% : 6351.951us 00:08:55.306 25.00000% : 6553.600us 00:08:55.306 50.00000% : 6805.662us 00:08:55.306 75.00000% : 7057.723us 00:08:55.306 90.00000% : 7309.785us 00:08:55.306 95.00000% : 7914.732us 00:08:55.306 98.00000% : 10435.348us 00:08:55.306 99.00000% : 13611.323us 00:08:55.306 99.50000% : 30045.735us 00:08:55.306 99.90000% : 35086.966us 00:08:55.306 99.99000% : 35490.265us 00:08:55.306 99.99900% : 35490.265us 00:08:55.306 99.99990% : 35490.265us 00:08:55.306 99.99999% : 35490.265us 00:08:55.306 00:08:55.306 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.306 ================================================================================= 00:08:55.306 1.00000% : 6225.920us 00:08:55.306 10.00000% : 6351.951us 00:08:55.306 25.00000% : 6553.600us 00:08:55.306 50.00000% : 6805.662us 00:08:55.306 75.00000% : 7057.723us 00:08:55.306 90.00000% : 7309.785us 00:08:55.306 95.00000% : 7864.320us 00:08:55.306 98.00000% : 10939.471us 00:08:55.306 99.00000% : 13611.323us 00:08:55.306 99.50000% : 28634.191us 00:08:55.306 99.90000% : 33675.422us 00:08:55.306 99.99000% : 34078.720us 00:08:55.306 99.99900% : 34078.720us 00:08:55.306 99.99990% : 34078.720us 00:08:55.306 99.99999% : 34078.720us 00:08:55.306 00:08:55.306 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:55.306 ================================================================================= 00:08:55.306 1.00000% : 6200.714us 00:08:55.306 10.00000% : 6351.951us 00:08:55.306 25.00000% : 6553.600us 00:08:55.306 50.00000% : 6805.662us 00:08:55.306 75.00000% : 7057.723us 00:08:55.306 90.00000% : 7309.785us 00:08:55.306 95.00000% : 7914.732us 00:08:55.306 98.00000% : 11090.708us 00:08:55.306 99.00000% : 
13712.148us 00:08:55.306 99.50000% : 26617.698us 00:08:55.306 99.90000% : 31658.929us 00:08:55.306 99.99000% : 32062.228us 00:08:55.306 99.99900% : 32263.877us 00:08:55.306 99.99990% : 32263.877us 00:08:55.306 99.99999% : 32263.877us 00:08:55.306 00:08:55.306 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.306 ================================================================================= 00:08:55.306 1.00000% : 6225.920us 00:08:55.306 10.00000% : 6351.951us 00:08:55.306 25.00000% : 6553.600us 00:08:55.306 50.00000% : 6805.662us 00:08:55.306 75.00000% : 7057.723us 00:08:55.306 90.00000% : 7309.785us 00:08:55.306 95.00000% : 7864.320us 00:08:55.306 98.00000% : 11241.945us 00:08:55.306 99.00000% : 14115.446us 00:08:55.306 99.50000% : 24702.031us 00:08:55.306 99.90000% : 29844.086us 00:08:55.306 99.99000% : 30247.385us 00:08:55.306 99.99900% : 30247.385us 00:08:55.306 99.99990% : 30247.385us 00:08:55.306 99.99999% : 30247.385us 00:08:55.306 00:08:55.306 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:55.306 ================================================================================= 00:08:55.306 1.00000% : 6225.920us 00:08:55.306 10.00000% : 6351.951us 00:08:55.306 25.00000% : 6553.600us 00:08:55.306 50.00000% : 6805.662us 00:08:55.306 75.00000% : 7057.723us 00:08:55.306 90.00000% : 7309.785us 00:08:55.306 95.00000% : 7864.320us 00:08:55.306 98.00000% : 10989.883us 00:08:55.306 99.00000% : 14821.218us 00:08:55.306 99.50000% : 19156.677us 00:08:55.306 99.90000% : 24298.732us 00:08:55.306 99.99000% : 24702.031us 00:08:55.306 99.99900% : 24702.031us 00:08:55.306 99.99990% : 24702.031us 00:08:55.306 99.99999% : 24702.031us 00:08:55.306 00:08:55.306 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.306 ============================================================================== 00:08:55.306 Range in us Cumulative IO count 00:08:55.306 5999.065 - 6024.271: 0.0332% ( 6) 00:08:55.306 6024.271 - 6049.477: 0.1441% ( 20) 00:08:55.306 6049.477 - 6074.683: 0.3047% ( 29) 00:08:55.306 6074.683 - 6099.889: 0.5818% ( 50) 00:08:55.306 6099.889 - 6125.095: 1.0472% ( 84) 00:08:55.306 6125.095 - 6150.302: 1.7786% ( 132) 00:08:55.306 6150.302 - 6175.508: 2.9255% ( 207) 00:08:55.306 6175.508 - 6200.714: 4.3440% ( 256) 00:08:55.306 6200.714 - 6225.920: 6.1170% ( 320) 00:08:55.306 6225.920 - 6251.126: 7.9289% ( 327) 00:08:55.306 6251.126 - 6276.332: 9.6631% ( 313) 00:08:55.306 6276.332 - 6301.538: 11.3918% ( 312) 00:08:55.306 6301.538 - 6326.745: 13.2702% ( 339) 00:08:55.306 6326.745 - 6351.951: 15.0598% ( 323) 00:08:55.306 6351.951 - 6377.157: 16.8329% ( 320) 00:08:55.306 6377.157 - 6402.363: 18.6891% ( 335) 00:08:55.306 6402.363 - 6427.569: 20.6006% ( 345) 00:08:55.306 6427.569 - 6452.775: 22.5676% ( 355) 00:08:55.306 6452.775 - 6503.188: 26.4240% ( 696) 00:08:55.306 6503.188 - 6553.600: 30.5740% ( 749) 00:08:55.306 6553.600 - 6604.012: 34.8072% ( 764) 00:08:55.306 6604.012 - 6654.425: 39.0514% ( 766) 00:08:55.306 6654.425 - 6704.837: 43.0075% ( 714) 00:08:55.306 6704.837 - 6755.249: 47.1465% ( 747) 00:08:55.306 6755.249 - 6805.662: 51.1968% ( 731) 00:08:55.306 6805.662 - 6856.074: 55.4244% ( 763) 00:08:55.306 6856.074 - 6906.486: 59.6354% ( 760) 00:08:55.306 6906.486 - 6956.898: 63.8686% ( 764) 00:08:55.306 6956.898 - 7007.311: 68.0186% ( 749) 00:08:55.306 7007.311 - 7057.723: 72.2961% ( 772) 00:08:55.306 7057.723 - 7108.135: 76.5459% ( 767) 00:08:55.306 7108.135 - 7158.548: 80.6627% ( 743) 00:08:55.306 7158.548 - 7208.960: 84.5634% ( 
704) 00:08:55.306 7208.960 - 7259.372: 87.7327% ( 572) 00:08:55.306 7259.372 - 7309.785: 89.8770% ( 387) 00:08:55.306 7309.785 - 7360.197: 91.0849% ( 218) 00:08:55.306 7360.197 - 7410.609: 91.9936% ( 164) 00:08:55.306 7410.609 - 7461.022: 92.7693% ( 140) 00:08:55.306 7461.022 - 7511.434: 93.4896% ( 130) 00:08:55.306 7511.434 - 7561.846: 94.0769% ( 106) 00:08:55.306 7561.846 - 7612.258: 94.3484% ( 49) 00:08:55.306 7612.258 - 7662.671: 94.5922% ( 44) 00:08:55.306 7662.671 - 7713.083: 94.7529% ( 29) 00:08:55.306 7713.083 - 7763.495: 94.8692% ( 21) 00:08:55.306 7763.495 - 7813.908: 95.0188% ( 27) 00:08:55.306 7813.908 - 7864.320: 95.1407% ( 22) 00:08:55.306 7864.320 - 7914.732: 95.2848% ( 26) 00:08:55.306 7914.732 - 7965.145: 95.4122% ( 23) 00:08:55.306 7965.145 - 8015.557: 95.5508% ( 25) 00:08:55.306 8015.557 - 8065.969: 95.6283% ( 14) 00:08:55.306 8065.969 - 8116.382: 95.7225% ( 17) 00:08:55.306 8116.382 - 8166.794: 95.8167% ( 17) 00:08:55.306 8166.794 - 8217.206: 95.8887% ( 13) 00:08:55.306 8217.206 - 8267.618: 95.9552% ( 12) 00:08:55.306 8267.618 - 8318.031: 95.9996% ( 8) 00:08:55.306 8318.031 - 8368.443: 96.0827% ( 15) 00:08:55.306 8368.443 - 8418.855: 96.1325% ( 9) 00:08:55.306 8418.855 - 8469.268: 96.1658% ( 6) 00:08:55.306 8469.268 - 8519.680: 96.2101% ( 8) 00:08:55.306 8519.680 - 8570.092: 96.2544% ( 8) 00:08:55.306 8570.092 - 8620.505: 96.2877% ( 6) 00:08:55.306 8620.505 - 8670.917: 96.3209% ( 6) 00:08:55.306 8670.917 - 8721.329: 96.3542% ( 6) 00:08:55.306 8721.329 - 8771.742: 96.3708% ( 3) 00:08:55.306 8771.742 - 8822.154: 96.4040% ( 6) 00:08:55.306 8822.154 - 8872.566: 96.4373% ( 6) 00:08:55.306 8872.566 - 8922.978: 96.4705% ( 6) 00:08:55.306 8922.978 - 8973.391: 96.4982% ( 5) 00:08:55.306 8973.391 - 9023.803: 96.5370% ( 7) 00:08:55.306 9023.803 - 9074.215: 96.5481% ( 2) 00:08:55.306 9074.215 - 9124.628: 96.5647% ( 3) 00:08:55.306 9124.628 - 9175.040: 96.5813% ( 3) 00:08:55.306 9175.040 - 9225.452: 96.5980% ( 3) 00:08:55.306 9225.452 - 9275.865: 96.6146% ( 3) 00:08:55.306 9275.865 - 9326.277: 96.6312% ( 3) 00:08:55.306 9326.277 - 9376.689: 96.6478% ( 3) 00:08:55.306 9376.689 - 9427.102: 96.6866% ( 7) 00:08:55.306 9427.102 - 9477.514: 96.7143% ( 5) 00:08:55.306 9477.514 - 9527.926: 96.7586% ( 8) 00:08:55.306 9527.926 - 9578.338: 96.7808% ( 4) 00:08:55.306 9578.338 - 9628.751: 96.8196% ( 7) 00:08:55.306 9628.751 - 9679.163: 96.8639% ( 8) 00:08:55.306 9679.163 - 9729.575: 96.9138% ( 9) 00:08:55.306 9729.575 - 9779.988: 96.9803% ( 12) 00:08:55.306 9779.988 - 9830.400: 97.0468% ( 12) 00:08:55.306 9830.400 - 9880.812: 97.1022% ( 10) 00:08:55.306 9880.812 - 9931.225: 97.1687% ( 12) 00:08:55.306 9931.225 - 9981.637: 97.2185% ( 9) 00:08:55.306 9981.637 - 10032.049: 97.2684% ( 9) 00:08:55.306 10032.049 - 10082.462: 97.3293% ( 11) 00:08:55.306 10082.462 - 10132.874: 97.3792% ( 9) 00:08:55.306 10132.874 - 10183.286: 97.4346% ( 10) 00:08:55.306 10183.286 - 10233.698: 97.4956% ( 11) 00:08:55.306 10233.698 - 10284.111: 97.5565% ( 11) 00:08:55.306 10284.111 - 10334.523: 97.6119% ( 10) 00:08:55.306 10334.523 - 10384.935: 97.6673% ( 10) 00:08:55.306 10384.935 - 10435.348: 97.7283% ( 11) 00:08:55.306 10435.348 - 10485.760: 97.7781% ( 9) 00:08:55.306 10485.760 - 10536.172: 97.8169% ( 7) 00:08:55.306 10536.172 - 10586.585: 97.8557% ( 7) 00:08:55.306 10586.585 - 10636.997: 97.9056% ( 9) 00:08:55.306 10636.997 - 10687.409: 97.9444% ( 7) 00:08:55.306 10687.409 - 10737.822: 97.9887% ( 8) 00:08:55.306 10737.822 - 10788.234: 98.0053% ( 3) 00:08:55.306 10788.234 - 10838.646: 98.0441% ( 7) 00:08:55.306 
10838.646 - 10889.058: 98.0884% ( 8) 00:08:55.306 10889.058 - 10939.471: 98.0995% ( 2) 00:08:55.306 10939.471 - 10989.883: 98.1161% ( 3) 00:08:55.306 10989.883 - 11040.295: 98.1272% ( 2) 00:08:55.306 11040.295 - 11090.708: 98.1494% ( 4) 00:08:55.306 11090.708 - 11141.120: 98.1605% ( 2) 00:08:55.306 11141.120 - 11191.532: 98.1771% ( 3) 00:08:55.306 11191.532 - 11241.945: 98.1992% ( 4) 00:08:55.306 11241.945 - 11292.357: 98.2159% ( 3) 00:08:55.306 11292.357 - 11342.769: 98.2325% ( 3) 00:08:55.306 11342.769 - 11393.182: 98.2547% ( 4) 00:08:55.306 11393.182 - 11443.594: 98.2657% ( 2) 00:08:55.306 11443.594 - 11494.006: 98.2934% ( 5) 00:08:55.306 11494.006 - 11544.418: 98.3045% ( 2) 00:08:55.306 11544.418 - 11594.831: 98.3267% ( 4) 00:08:55.306 11594.831 - 11645.243: 98.3433% ( 3) 00:08:55.306 11645.243 - 11695.655: 98.3544% ( 2) 00:08:55.306 11695.655 - 11746.068: 98.3766% ( 4) 00:08:55.306 11746.068 - 11796.480: 98.3987% ( 4) 00:08:55.306 11796.480 - 11846.892: 98.4430% ( 8) 00:08:55.306 11846.892 - 11897.305: 98.4763% ( 6) 00:08:55.306 11897.305 - 11947.717: 98.5095% ( 6) 00:08:55.306 11947.717 - 11998.129: 98.5262% ( 3) 00:08:55.306 11998.129 - 12048.542: 98.5428% ( 3) 00:08:55.306 12048.542 - 12098.954: 98.5705% ( 5) 00:08:55.306 12098.954 - 12149.366: 98.5926% ( 4) 00:08:55.306 12149.366 - 12199.778: 98.6203% ( 5) 00:08:55.306 12199.778 - 12250.191: 98.6370% ( 3) 00:08:55.306 12250.191 - 12300.603: 98.6702% ( 6) 00:08:55.306 12300.603 - 12351.015: 98.6924% ( 4) 00:08:55.306 12351.015 - 12401.428: 98.7145% ( 4) 00:08:55.306 12401.428 - 12451.840: 98.7422% ( 5) 00:08:55.306 12451.840 - 12502.252: 98.7589% ( 3) 00:08:55.306 12502.252 - 12552.665: 98.7921% ( 6) 00:08:55.306 12552.665 - 12603.077: 98.8143% ( 4) 00:08:55.306 12603.077 - 12653.489: 98.8420% ( 5) 00:08:55.306 12653.489 - 12703.902: 98.8586% ( 3) 00:08:55.306 12703.902 - 12754.314: 98.8808% ( 4) 00:08:55.306 12754.314 - 12804.726: 98.8974% ( 3) 00:08:55.306 12804.726 - 12855.138: 98.9140% ( 3) 00:08:55.306 12855.138 - 12905.551: 98.9306% ( 3) 00:08:55.306 12905.551 - 13006.375: 98.9362% ( 1) 00:08:55.306 13913.797 - 14014.622: 98.9473% ( 2) 00:08:55.306 14014.622 - 14115.446: 98.9639% ( 3) 00:08:55.306 14115.446 - 14216.271: 98.9805% ( 3) 00:08:55.306 14216.271 - 14317.095: 99.0027% ( 4) 00:08:55.306 14317.095 - 14417.920: 99.0193% ( 3) 00:08:55.306 14417.920 - 14518.745: 99.0359% ( 3) 00:08:55.306 14518.745 - 14619.569: 99.0525% ( 3) 00:08:55.306 14619.569 - 14720.394: 99.0691% ( 3) 00:08:55.306 14720.394 - 14821.218: 99.0858% ( 3) 00:08:55.306 14821.218 - 14922.043: 99.1024% ( 3) 00:08:55.306 14922.043 - 15022.868: 99.1190% ( 3) 00:08:55.306 15022.868 - 15123.692: 99.1412% ( 4) 00:08:55.306 15123.692 - 15224.517: 99.1578% ( 3) 00:08:55.306 15224.517 - 15325.342: 99.1744% ( 3) 00:08:55.306 15325.342 - 15426.166: 99.1910% ( 3) 00:08:55.306 15426.166 - 15526.991: 99.2077% ( 3) 00:08:55.306 15526.991 - 15627.815: 99.2243% ( 3) 00:08:55.306 15627.815 - 15728.640: 99.2409% ( 3) 00:08:55.306 15728.640 - 15829.465: 99.2575% ( 3) 00:08:55.306 15829.465 - 15930.289: 99.2797% ( 4) 00:08:55.306 15930.289 - 16031.114: 99.2908% ( 2) 00:08:55.306 30650.683 - 30852.332: 99.2963% ( 1) 00:08:55.306 30852.332 - 31053.982: 99.3406% ( 8) 00:08:55.306 31053.982 - 31255.631: 99.3794% ( 7) 00:08:55.306 31255.631 - 31457.280: 99.4182% ( 7) 00:08:55.306 31457.280 - 31658.929: 99.4681% ( 9) 00:08:55.306 31658.929 - 31860.578: 99.4958% ( 5) 00:08:55.306 31860.578 - 32062.228: 99.5401% ( 8) 00:08:55.306 32062.228 - 32263.877: 99.5789% ( 7) 00:08:55.306 
32263.877 - 32465.526: 99.6232% ( 8) 00:08:55.306 32465.526 - 32667.175: 99.6454% ( 4) 00:08:55.306 35490.265 - 35691.914: 99.6565% ( 2) 00:08:55.306 35691.914 - 35893.563: 99.7008% ( 8) 00:08:55.306 35893.563 - 36095.212: 99.7451% ( 8) 00:08:55.306 36095.212 - 36296.862: 99.7839% ( 7) 00:08:55.306 36296.862 - 36498.511: 99.8227% ( 7) 00:08:55.306 36498.511 - 36700.160: 99.8615% ( 7) 00:08:55.306 36700.160 - 36901.809: 99.9058% ( 8) 00:08:55.306 36901.809 - 37103.458: 99.9446% ( 7) 00:08:55.306 37103.458 - 37305.108: 99.9889% ( 8) 00:08:55.306 37305.108 - 37506.757: 100.0000% ( 2) 00:08:55.306 00:08:55.306 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.306 ============================================================================== 00:08:55.306 Range in us Cumulative IO count 00:08:55.306 6099.889 - 6125.095: 0.0443% ( 8) 00:08:55.306 6125.095 - 6150.302: 0.1607% ( 21) 00:08:55.306 6150.302 - 6175.508: 0.4322% ( 49) 00:08:55.306 6175.508 - 6200.714: 0.9253% ( 89) 00:08:55.306 6200.714 - 6225.920: 1.6013% ( 122) 00:08:55.306 6225.920 - 6251.126: 2.6984% ( 198) 00:08:55.306 6251.126 - 6276.332: 4.4105% ( 309) 00:08:55.306 6276.332 - 6301.538: 6.3387% ( 348) 00:08:55.306 6301.538 - 6326.745: 8.4109% ( 374) 00:08:55.306 6326.745 - 6351.951: 10.4942% ( 376) 00:08:55.306 6351.951 - 6377.157: 12.6385% ( 387) 00:08:55.306 6377.157 - 6402.363: 14.7108% ( 374) 00:08:55.306 6402.363 - 6427.569: 16.8994% ( 395) 00:08:55.306 6427.569 - 6452.775: 18.8996% ( 361) 00:08:55.306 6452.775 - 6503.188: 23.3766% ( 808) 00:08:55.306 6503.188 - 6553.600: 27.9255% ( 821) 00:08:55.306 6553.600 - 6604.012: 32.6795% ( 858) 00:08:55.306 6604.012 - 6654.425: 37.5887% ( 886) 00:08:55.306 6654.425 - 6704.837: 42.3980% ( 868) 00:08:55.306 6704.837 - 6755.249: 47.2241% ( 871) 00:08:55.306 6755.249 - 6805.662: 52.0501% ( 871) 00:08:55.306 6805.662 - 6856.074: 56.7930% ( 856) 00:08:55.306 6856.074 - 6906.486: 61.6467% ( 876) 00:08:55.306 6906.486 - 6956.898: 66.4561% ( 868) 00:08:55.306 6956.898 - 7007.311: 71.4428% ( 900) 00:08:55.306 7007.311 - 7057.723: 76.1913% ( 857) 00:08:55.306 7057.723 - 7108.135: 80.8732% ( 845) 00:08:55.306 7108.135 - 7158.548: 84.9457% ( 735) 00:08:55.306 7158.548 - 7208.960: 87.9156% ( 536) 00:08:55.306 7208.960 - 7259.372: 89.6997% ( 322) 00:08:55.306 7259.372 - 7309.785: 90.8078% ( 200) 00:08:55.306 7309.785 - 7360.197: 91.7387% ( 168) 00:08:55.306 7360.197 - 7410.609: 92.5587% ( 148) 00:08:55.306 7410.609 - 7461.022: 93.2513% ( 125) 00:08:55.306 7461.022 - 7511.434: 93.7168% ( 84) 00:08:55.306 7511.434 - 7561.846: 93.9605% ( 44) 00:08:55.306 7561.846 - 7612.258: 94.1434% ( 33) 00:08:55.306 7612.258 - 7662.671: 94.2930% ( 27) 00:08:55.306 7662.671 - 7713.083: 94.4315% ( 25) 00:08:55.306 7713.083 - 7763.495: 94.5645% ( 24) 00:08:55.306 7763.495 - 7813.908: 94.7584% ( 35) 00:08:55.306 7813.908 - 7864.320: 94.9025% ( 26) 00:08:55.306 7864.320 - 7914.732: 95.0798% ( 32) 00:08:55.306 7914.732 - 7965.145: 95.2017% ( 22) 00:08:55.306 7965.145 - 8015.557: 95.3125% ( 20) 00:08:55.306 8015.557 - 8065.969: 95.4178% ( 19) 00:08:55.306 8065.969 - 8116.382: 95.5175% ( 18) 00:08:55.306 8116.382 - 8166.794: 95.6117% ( 17) 00:08:55.306 8166.794 - 8217.206: 95.7004% ( 16) 00:08:55.306 8217.206 - 8267.618: 95.8001% ( 18) 00:08:55.306 8267.618 - 8318.031: 95.8610% ( 11) 00:08:55.306 8318.031 - 8368.443: 95.9275% ( 12) 00:08:55.306 8368.443 - 8418.855: 95.9885% ( 11) 00:08:55.306 8418.855 - 8469.268: 96.0494% ( 11) 00:08:55.306 8469.268 - 8519.680: 96.1104% ( 11) 00:08:55.306 8519.680 - 
8570.092: 96.1713% ( 11) 00:08:55.306 8570.092 - 8620.505: 96.2323% ( 11) 00:08:55.306 8620.505 - 8670.917: 96.2932% ( 11) 00:08:55.306 8670.917 - 8721.329: 96.3209% ( 5) 00:08:55.306 8721.329 - 8771.742: 96.3708% ( 9) 00:08:55.306 8771.742 - 8822.154: 96.4040% ( 6) 00:08:55.306 8822.154 - 8872.566: 96.4317% ( 5) 00:08:55.306 8872.566 - 8922.978: 96.4539% ( 4) 00:08:55.306 9275.865 - 9326.277: 96.4927% ( 7) 00:08:55.306 9326.277 - 9376.689: 96.5536% ( 11) 00:08:55.306 9376.689 - 9427.102: 96.6312% ( 14) 00:08:55.306 9427.102 - 9477.514: 96.6645% ( 6) 00:08:55.306 9477.514 - 9527.926: 96.7143% ( 9) 00:08:55.306 9527.926 - 9578.338: 96.7586% ( 8) 00:08:55.306 9578.338 - 9628.751: 96.8085% ( 9) 00:08:55.306 9628.751 - 9679.163: 96.8972% ( 16) 00:08:55.306 9679.163 - 9729.575: 97.0024% ( 19) 00:08:55.306 9729.575 - 9779.988: 97.0745% ( 13) 00:08:55.306 9779.988 - 9830.400: 97.1520% ( 14) 00:08:55.306 9830.400 - 9880.812: 97.2296% ( 14) 00:08:55.306 9880.812 - 9931.225: 97.3238% ( 17) 00:08:55.306 9931.225 - 9981.637: 97.4125% ( 16) 00:08:55.306 9981.637 - 10032.049: 97.4900% ( 14) 00:08:55.306 10032.049 - 10082.462: 97.5731% ( 15) 00:08:55.306 10082.462 - 10132.874: 97.6673% ( 17) 00:08:55.306 10132.874 - 10183.286: 97.7449% ( 14) 00:08:55.306 10183.286 - 10233.698: 97.8114% ( 12) 00:08:55.306 10233.698 - 10284.111: 97.8834% ( 13) 00:08:55.306 10284.111 - 10334.523: 97.9333% ( 9) 00:08:55.306 10334.523 - 10384.935: 97.9887% ( 10) 00:08:55.306 10384.935 - 10435.348: 98.0330% ( 8) 00:08:55.306 10435.348 - 10485.760: 98.0829% ( 9) 00:08:55.306 10485.760 - 10536.172: 98.1328% ( 9) 00:08:55.306 10536.172 - 10586.585: 98.1660% ( 6) 00:08:55.306 10586.585 - 10636.997: 98.1771% ( 2) 00:08:55.306 10636.997 - 10687.409: 98.1826% ( 1) 00:08:55.306 10687.409 - 10737.822: 98.1937% ( 2) 00:08:55.306 10737.822 - 10788.234: 98.2048% ( 2) 00:08:55.306 10788.234 - 10838.646: 98.2159% ( 2) 00:08:55.306 10838.646 - 10889.058: 98.2270% ( 2) 00:08:55.306 11746.068 - 11796.480: 98.2436% ( 3) 00:08:55.306 11796.480 - 11846.892: 98.2491% ( 1) 00:08:55.307 11846.892 - 11897.305: 98.2657% ( 3) 00:08:55.307 11897.305 - 11947.717: 98.2990% ( 6) 00:08:55.307 11947.717 - 11998.129: 98.3267% ( 5) 00:08:55.307 11998.129 - 12048.542: 98.3544% ( 5) 00:08:55.307 12048.542 - 12098.954: 98.3821% ( 5) 00:08:55.307 12098.954 - 12149.366: 98.4098% ( 5) 00:08:55.307 12149.366 - 12199.778: 98.4430% ( 6) 00:08:55.307 12199.778 - 12250.191: 98.4763% ( 6) 00:08:55.307 12250.191 - 12300.603: 98.5040% ( 5) 00:08:55.307 12300.603 - 12351.015: 98.5372% ( 6) 00:08:55.307 12351.015 - 12401.428: 98.5594% ( 4) 00:08:55.307 12401.428 - 12451.840: 98.5926% ( 6) 00:08:55.307 12451.840 - 12502.252: 98.6203% ( 5) 00:08:55.307 12502.252 - 12552.665: 98.6480% ( 5) 00:08:55.307 12552.665 - 12603.077: 98.6758% ( 5) 00:08:55.307 12603.077 - 12653.489: 98.7035% ( 5) 00:08:55.307 12653.489 - 12703.902: 98.7312% ( 5) 00:08:55.307 12703.902 - 12754.314: 98.7589% ( 5) 00:08:55.307 12754.314 - 12804.726: 98.7866% ( 5) 00:08:55.307 12804.726 - 12855.138: 98.8143% ( 5) 00:08:55.307 12855.138 - 12905.551: 98.8254% ( 2) 00:08:55.307 12905.551 - 13006.375: 98.8475% ( 4) 00:08:55.307 13006.375 - 13107.200: 98.8697% ( 4) 00:08:55.307 13107.200 - 13208.025: 98.8918% ( 4) 00:08:55.307 13208.025 - 13308.849: 98.9306% ( 7) 00:08:55.307 13308.849 - 13409.674: 98.9639% ( 6) 00:08:55.307 13409.674 - 13510.498: 98.9916% ( 5) 00:08:55.307 13510.498 - 13611.323: 99.0082% ( 3) 00:08:55.307 13611.323 - 13712.148: 99.0359% ( 5) 00:08:55.307 13712.148 - 13812.972: 99.0581% ( 4) 
00:08:55.307 13812.972 - 13913.797: 99.0802% ( 4) 00:08:55.307 13913.797 - 14014.622: 99.0969% ( 3) 00:08:55.307 14014.622 - 14115.446: 99.1190% ( 4) 00:08:55.307 14115.446 - 14216.271: 99.1412% ( 4) 00:08:55.307 14216.271 - 14317.095: 99.1578% ( 3) 00:08:55.307 14317.095 - 14417.920: 99.1800% ( 4) 00:08:55.307 14417.920 - 14518.745: 99.2021% ( 4) 00:08:55.307 14518.745 - 14619.569: 99.2243% ( 4) 00:08:55.307 14619.569 - 14720.394: 99.2465% ( 4) 00:08:55.307 14720.394 - 14821.218: 99.2686% ( 4) 00:08:55.307 14821.218 - 14922.043: 99.2852% ( 3) 00:08:55.307 14922.043 - 15022.868: 99.2908% ( 1) 00:08:55.307 28835.840 - 29037.489: 99.2963% ( 1) 00:08:55.307 29037.489 - 29239.138: 99.3351% ( 7) 00:08:55.307 29239.138 - 29440.788: 99.3794% ( 8) 00:08:55.307 29440.788 - 29642.437: 99.4238% ( 8) 00:08:55.307 29642.437 - 29844.086: 99.4681% ( 8) 00:08:55.307 29844.086 - 30045.735: 99.5124% ( 8) 00:08:55.307 30045.735 - 30247.385: 99.5567% ( 8) 00:08:55.307 30247.385 - 30449.034: 99.5955% ( 7) 00:08:55.307 30449.034 - 30650.683: 99.6343% ( 7) 00:08:55.307 30650.683 - 30852.332: 99.6454% ( 2) 00:08:55.307 33675.422 - 33877.071: 99.6509% ( 1) 00:08:55.307 33877.071 - 34078.720: 99.6953% ( 8) 00:08:55.307 34078.720 - 34280.369: 99.7396% ( 8) 00:08:55.307 34280.369 - 34482.018: 99.7839% ( 8) 00:08:55.307 34482.018 - 34683.668: 99.8282% ( 8) 00:08:55.307 34683.668 - 34885.317: 99.8670% ( 7) 00:08:55.307 34885.317 - 35086.966: 99.9113% ( 8) 00:08:55.307 35086.966 - 35288.615: 99.9557% ( 8) 00:08:55.307 35288.615 - 35490.265: 100.0000% ( 8) 00:08:55.307 00:08:55.307 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.307 ============================================================================== 00:08:55.307 Range in us Cumulative IO count 00:08:55.307 6099.889 - 6125.095: 0.0332% ( 6) 00:08:55.307 6125.095 - 6150.302: 0.1718% ( 25) 00:08:55.307 6150.302 - 6175.508: 0.4488% ( 50) 00:08:55.307 6175.508 - 6200.714: 0.8754% ( 77) 00:08:55.307 6200.714 - 6225.920: 1.6290% ( 136) 00:08:55.307 6225.920 - 6251.126: 2.7593% ( 204) 00:08:55.307 6251.126 - 6276.332: 4.4548% ( 306) 00:08:55.307 6276.332 - 6301.538: 6.3941% ( 350) 00:08:55.307 6301.538 - 6326.745: 8.5993% ( 398) 00:08:55.307 6326.745 - 6351.951: 11.0760% ( 447) 00:08:55.307 6351.951 - 6377.157: 13.5140% ( 440) 00:08:55.307 6377.157 - 6402.363: 15.7137% ( 397) 00:08:55.307 6402.363 - 6427.569: 17.9244% ( 399) 00:08:55.307 6427.569 - 6452.775: 19.9911% ( 373) 00:08:55.307 6452.775 - 6503.188: 24.2409% ( 767) 00:08:55.307 6503.188 - 6553.600: 28.8564% ( 833) 00:08:55.307 6553.600 - 6604.012: 33.5550% ( 848) 00:08:55.307 6604.012 - 6654.425: 38.3422% ( 864) 00:08:55.307 6654.425 - 6704.837: 43.0519% ( 850) 00:08:55.307 6704.837 - 6755.249: 47.6341% ( 827) 00:08:55.307 6755.249 - 6805.662: 52.3382% ( 849) 00:08:55.307 6805.662 - 6856.074: 56.9648% ( 835) 00:08:55.307 6856.074 - 6906.486: 61.6135% ( 839) 00:08:55.307 6906.486 - 6956.898: 66.3287% ( 851) 00:08:55.307 6956.898 - 7007.311: 71.1492% ( 870) 00:08:55.307 7007.311 - 7057.723: 75.8477% ( 848) 00:08:55.307 7057.723 - 7108.135: 80.3358% ( 810) 00:08:55.307 7108.135 - 7158.548: 84.2808% ( 712) 00:08:55.307 7158.548 - 7208.960: 87.1620% ( 520) 00:08:55.307 7208.960 - 7259.372: 88.9738% ( 327) 00:08:55.307 7259.372 - 7309.785: 90.2593% ( 232) 00:08:55.307 7309.785 - 7360.197: 91.3065% ( 189) 00:08:55.307 7360.197 - 7410.609: 92.2374% ( 168) 00:08:55.307 7410.609 - 7461.022: 93.0186% ( 141) 00:08:55.307 7461.022 - 7511.434: 93.5672% ( 99) 00:08:55.307 7511.434 - 7561.846: 93.8664% 
( 54) 00:08:55.307 7561.846 - 7612.258: 94.0991% ( 42) 00:08:55.307 7612.258 - 7662.671: 94.2985% ( 36) 00:08:55.307 7662.671 - 7713.083: 94.5091% ( 38) 00:08:55.307 7713.083 - 7763.495: 94.6975% ( 34) 00:08:55.307 7763.495 - 7813.908: 94.8692% ( 31) 00:08:55.307 7813.908 - 7864.320: 95.0355% ( 30) 00:08:55.307 7864.320 - 7914.732: 95.1574% ( 22) 00:08:55.307 7914.732 - 7965.145: 95.2460% ( 16) 00:08:55.307 7965.145 - 8015.557: 95.3402% ( 17) 00:08:55.307 8015.557 - 8065.969: 95.4122% ( 13) 00:08:55.307 8065.969 - 8116.382: 95.4843% ( 13) 00:08:55.307 8116.382 - 8166.794: 95.5729% ( 16) 00:08:55.307 8166.794 - 8217.206: 95.6560% ( 15) 00:08:55.307 8217.206 - 8267.618: 95.7004% ( 8) 00:08:55.307 8267.618 - 8318.031: 95.7281% ( 5) 00:08:55.307 8318.031 - 8368.443: 95.7613% ( 6) 00:08:55.307 8368.443 - 8418.855: 95.8001% ( 7) 00:08:55.307 8418.855 - 8469.268: 95.8389% ( 7) 00:08:55.307 8469.268 - 8519.680: 95.8943% ( 10) 00:08:55.307 8519.680 - 8570.092: 95.9608% ( 12) 00:08:55.307 8570.092 - 8620.505: 96.0106% ( 9) 00:08:55.307 8620.505 - 8670.917: 96.0550% ( 8) 00:08:55.307 8670.917 - 8721.329: 96.0993% ( 8) 00:08:55.307 8721.329 - 8771.742: 96.1547% ( 10) 00:08:55.307 8771.742 - 8822.154: 96.2046% ( 9) 00:08:55.307 8822.154 - 8872.566: 96.2600% ( 10) 00:08:55.307 8872.566 - 8922.978: 96.3265% ( 12) 00:08:55.307 8922.978 - 8973.391: 96.3985% ( 13) 00:08:55.307 8973.391 - 9023.803: 96.4705% ( 13) 00:08:55.307 9023.803 - 9074.215: 96.5592% ( 16) 00:08:55.307 9074.215 - 9124.628: 96.6534% ( 17) 00:08:55.307 9124.628 - 9175.040: 96.7254% ( 13) 00:08:55.307 9175.040 - 9225.452: 96.8030% ( 14) 00:08:55.307 9225.452 - 9275.865: 96.8750% ( 13) 00:08:55.307 9275.865 - 9326.277: 96.9359% ( 11) 00:08:55.307 9326.277 - 9376.689: 97.0080% ( 13) 00:08:55.307 9376.689 - 9427.102: 97.0689% ( 11) 00:08:55.307 9427.102 - 9477.514: 97.1077% ( 7) 00:08:55.307 9477.514 - 9527.926: 97.1354% ( 5) 00:08:55.307 9527.926 - 9578.338: 97.1576% ( 4) 00:08:55.307 9578.338 - 9628.751: 97.1687% ( 2) 00:08:55.307 9628.751 - 9679.163: 97.1908% ( 4) 00:08:55.307 9679.163 - 9729.575: 97.2074% ( 3) 00:08:55.307 9729.575 - 9779.988: 97.2296% ( 4) 00:08:55.307 9779.988 - 9830.400: 97.2462% ( 3) 00:08:55.307 9830.400 - 9880.812: 97.2629% ( 3) 00:08:55.307 9880.812 - 9931.225: 97.2850% ( 4) 00:08:55.307 9931.225 - 9981.637: 97.3626% ( 14) 00:08:55.307 9981.637 - 10032.049: 97.4069% ( 8) 00:08:55.307 10032.049 - 10082.462: 97.4346% ( 5) 00:08:55.307 10082.462 - 10132.874: 97.4679% ( 6) 00:08:55.307 10132.874 - 10183.286: 97.5011% ( 6) 00:08:55.307 10183.286 - 10233.698: 97.5288% ( 5) 00:08:55.307 10233.698 - 10284.111: 97.5676% ( 7) 00:08:55.307 10284.111 - 10334.523: 97.6064% ( 7) 00:08:55.307 10334.523 - 10384.935: 97.6341% ( 5) 00:08:55.307 10384.935 - 10435.348: 97.6784% ( 8) 00:08:55.307 10435.348 - 10485.760: 97.7227% ( 8) 00:08:55.307 10485.760 - 10536.172: 97.7671% ( 8) 00:08:55.307 10536.172 - 10586.585: 97.7948% ( 5) 00:08:55.307 10586.585 - 10636.997: 97.8225% ( 5) 00:08:55.307 10636.997 - 10687.409: 97.8502% ( 5) 00:08:55.307 10687.409 - 10737.822: 97.8779% ( 5) 00:08:55.307 10737.822 - 10788.234: 97.9167% ( 7) 00:08:55.307 10788.234 - 10838.646: 97.9499% ( 6) 00:08:55.307 10838.646 - 10889.058: 97.9887% ( 7) 00:08:55.307 10889.058 - 10939.471: 98.0219% ( 6) 00:08:55.307 10939.471 - 10989.883: 98.0496% ( 5) 00:08:55.307 10989.883 - 11040.295: 98.0773% ( 5) 00:08:55.307 11040.295 - 11090.708: 98.1106% ( 6) 00:08:55.307 11090.708 - 11141.120: 98.1438% ( 6) 00:08:55.307 11141.120 - 11191.532: 98.1826% ( 7) 00:08:55.307 
11191.532 - 11241.945: 98.2159% ( 6) 00:08:55.307 11241.945 - 11292.357: 98.2547% ( 7) 00:08:55.307 11292.357 - 11342.769: 98.2934% ( 7) 00:08:55.307 11342.769 - 11393.182: 98.3322% ( 7) 00:08:55.307 11393.182 - 11443.594: 98.3544% ( 4) 00:08:55.307 11443.594 - 11494.006: 98.3710% ( 3) 00:08:55.307 11494.006 - 11544.418: 98.3876% ( 3) 00:08:55.307 11544.418 - 11594.831: 98.4043% ( 3) 00:08:55.307 11594.831 - 11645.243: 98.4264% ( 4) 00:08:55.307 11645.243 - 11695.655: 98.4320% ( 1) 00:08:55.307 11695.655 - 11746.068: 98.4430% ( 2) 00:08:55.307 11746.068 - 11796.480: 98.4597% ( 3) 00:08:55.307 11796.480 - 11846.892: 98.4707% ( 2) 00:08:55.307 11846.892 - 11897.305: 98.4763% ( 1) 00:08:55.307 11897.305 - 11947.717: 98.4874% ( 2) 00:08:55.307 11947.717 - 11998.129: 98.4984% ( 2) 00:08:55.307 11998.129 - 12048.542: 98.5151% ( 3) 00:08:55.307 12048.542 - 12098.954: 98.5262% ( 2) 00:08:55.307 12098.954 - 12149.366: 98.5372% ( 2) 00:08:55.307 12149.366 - 12199.778: 98.5483% ( 2) 00:08:55.307 12199.778 - 12250.191: 98.5649% ( 3) 00:08:55.307 12250.191 - 12300.603: 98.5926% ( 5) 00:08:55.307 12300.603 - 12351.015: 98.6093% ( 3) 00:08:55.307 12351.015 - 12401.428: 98.6148% ( 1) 00:08:55.307 12401.428 - 12451.840: 98.6259% ( 2) 00:08:55.307 12451.840 - 12502.252: 98.6370% ( 2) 00:08:55.307 12502.252 - 12552.665: 98.6480% ( 2) 00:08:55.307 12552.665 - 12603.077: 98.6591% ( 2) 00:08:55.307 12603.077 - 12653.489: 98.6702% ( 2) 00:08:55.307 12653.489 - 12703.902: 98.6758% ( 1) 00:08:55.307 12703.902 - 12754.314: 98.6868% ( 2) 00:08:55.307 12754.314 - 12804.726: 98.6979% ( 2) 00:08:55.307 12804.726 - 12855.138: 98.7090% ( 2) 00:08:55.307 12855.138 - 12905.551: 98.7201% ( 2) 00:08:55.307 12905.551 - 13006.375: 98.7422% ( 4) 00:08:55.307 13006.375 - 13107.200: 98.7755% ( 6) 00:08:55.307 13107.200 - 13208.025: 98.8531% ( 14) 00:08:55.307 13208.025 - 13308.849: 98.8918% ( 7) 00:08:55.307 13308.849 - 13409.674: 98.9306% ( 7) 00:08:55.307 13409.674 - 13510.498: 98.9694% ( 7) 00:08:55.307 13510.498 - 13611.323: 99.0137% ( 8) 00:08:55.307 13611.323 - 13712.148: 99.0525% ( 7) 00:08:55.307 13712.148 - 13812.972: 99.0969% ( 8) 00:08:55.307 13812.972 - 13913.797: 99.1356% ( 7) 00:08:55.307 13913.797 - 14014.622: 99.1578% ( 4) 00:08:55.307 14014.622 - 14115.446: 99.1800% ( 4) 00:08:55.307 14115.446 - 14216.271: 99.2021% ( 4) 00:08:55.307 14216.271 - 14317.095: 99.2188% ( 3) 00:08:55.307 14317.095 - 14417.920: 99.2409% ( 4) 00:08:55.307 14417.920 - 14518.745: 99.2631% ( 4) 00:08:55.307 14518.745 - 14619.569: 99.2852% ( 4) 00:08:55.307 14619.569 - 14720.394: 99.2908% ( 1) 00:08:55.307 27424.295 - 27625.945: 99.3074% ( 3) 00:08:55.307 27625.945 - 27827.594: 99.3517% ( 8) 00:08:55.307 27827.594 - 28029.243: 99.3905% ( 7) 00:08:55.307 28029.243 - 28230.892: 99.4348% ( 8) 00:08:55.307 28230.892 - 28432.542: 99.4736% ( 7) 00:08:55.307 28432.542 - 28634.191: 99.5180% ( 8) 00:08:55.307 28634.191 - 28835.840: 99.5623% ( 8) 00:08:55.307 28835.840 - 29037.489: 99.6066% ( 8) 00:08:55.307 29037.489 - 29239.138: 99.6454% ( 7) 00:08:55.307 32263.877 - 32465.526: 99.6731% ( 5) 00:08:55.307 32465.526 - 32667.175: 99.7174% ( 8) 00:08:55.307 32667.175 - 32868.825: 99.7507% ( 6) 00:08:55.307 32868.825 - 33070.474: 99.7950% ( 8) 00:08:55.307 33070.474 - 33272.123: 99.8393% ( 8) 00:08:55.307 33272.123 - 33473.772: 99.8836% ( 8) 00:08:55.307 33473.772 - 33675.422: 99.9224% ( 7) 00:08:55.307 33675.422 - 33877.071: 99.9668% ( 8) 00:08:55.307 33877.071 - 34078.720: 100.0000% ( 6) 00:08:55.307 00:08:55.307 Latency histogram for PCIE 
(0000:00:12.0) NSID 1 from core 0: 00:08:55.307 ============================================================================== 00:08:55.307 Range in us Cumulative IO count 00:08:55.307 6074.683 - 6099.889: 0.0055% ( 1) 00:08:55.307 6099.889 - 6125.095: 0.0166% ( 2) 00:08:55.307 6125.095 - 6150.302: 0.0887% ( 13) 00:08:55.307 6150.302 - 6175.508: 0.4654% ( 68) 00:08:55.307 6175.508 - 6200.714: 1.0638% ( 108) 00:08:55.307 6200.714 - 6225.920: 1.8395% ( 140) 00:08:55.307 6225.920 - 6251.126: 2.9422% ( 199) 00:08:55.307 6251.126 - 6276.332: 4.4936% ( 280) 00:08:55.307 6276.332 - 6301.538: 6.3165% ( 329) 00:08:55.307 6301.538 - 6326.745: 8.4220% ( 380) 00:08:55.307 6326.745 - 6351.951: 10.6660% ( 405) 00:08:55.307 6351.951 - 6377.157: 12.8989% ( 403) 00:08:55.307 6377.157 - 6402.363: 15.0654% ( 391) 00:08:55.307 6402.363 - 6427.569: 17.3260% ( 408) 00:08:55.307 6427.569 - 6452.775: 19.4481% ( 383) 00:08:55.307 6452.775 - 6503.188: 23.5040% ( 732) 00:08:55.307 6503.188 - 6553.600: 27.9422% ( 801) 00:08:55.307 6553.600 - 6604.012: 32.7737% ( 872) 00:08:55.307 6604.012 - 6654.425: 37.6496% ( 880) 00:08:55.307 6654.425 - 6704.837: 42.4867% ( 873) 00:08:55.307 6704.837 - 6755.249: 47.1853% ( 848) 00:08:55.307 6755.249 - 6805.662: 52.0279% ( 874) 00:08:55.307 6805.662 - 6856.074: 56.7708% ( 856) 00:08:55.307 6856.074 - 6906.486: 61.5747% ( 867) 00:08:55.307 6906.486 - 6956.898: 66.4229% ( 875) 00:08:55.307 6956.898 - 7007.311: 71.3874% ( 896) 00:08:55.307 7007.311 - 7057.723: 76.2910% ( 885) 00:08:55.307 7057.723 - 7108.135: 80.9619% ( 843) 00:08:55.307 7108.135 - 7158.548: 84.9900% ( 727) 00:08:55.307 7158.548 - 7208.960: 87.8546% ( 517) 00:08:55.307 7208.960 - 7259.372: 89.6221% ( 319) 00:08:55.307 7259.372 - 7309.785: 90.7912% ( 211) 00:08:55.307 7309.785 - 7360.197: 91.7055% ( 165) 00:08:55.307 7360.197 - 7410.609: 92.5089% ( 145) 00:08:55.307 7410.609 - 7461.022: 93.2292% ( 130) 00:08:55.307 7461.022 - 7511.434: 93.6780% ( 81) 00:08:55.307 7511.434 - 7561.846: 93.9439% ( 48) 00:08:55.307 7561.846 - 7612.258: 94.1545% ( 38) 00:08:55.307 7612.258 - 7662.671: 94.3096% ( 28) 00:08:55.307 7662.671 - 7713.083: 94.4592% ( 27) 00:08:55.307 7713.083 - 7763.495: 94.6254% ( 30) 00:08:55.307 7763.495 - 7813.908: 94.8027% ( 32) 00:08:55.307 7813.908 - 7864.320: 94.9413% ( 25) 00:08:55.307 7864.320 - 7914.732: 95.0687% ( 23) 00:08:55.307 7914.732 - 7965.145: 95.1574% ( 16) 00:08:55.307 7965.145 - 8015.557: 95.2405% ( 15) 00:08:55.307 8015.557 - 8065.969: 95.3070% ( 12) 00:08:55.307 8065.969 - 8116.382: 95.3845% ( 14) 00:08:55.307 8116.382 - 8166.794: 95.4566% ( 13) 00:08:55.307 8166.794 - 8217.206: 95.5230% ( 12) 00:08:55.307 8217.206 - 8267.618: 95.5840% ( 11) 00:08:55.307 8267.618 - 8318.031: 95.6394% ( 10) 00:08:55.307 8318.031 - 8368.443: 95.6893% ( 9) 00:08:55.307 8368.443 - 8418.855: 95.7558% ( 12) 00:08:55.307 8418.855 - 8469.268: 95.8112% ( 10) 00:08:55.307 8469.268 - 8519.680: 95.8721% ( 11) 00:08:55.307 8519.680 - 8570.092: 95.9220% ( 9) 00:08:55.307 8570.092 - 8620.505: 95.9829% ( 11) 00:08:55.307 8620.505 - 8670.917: 96.0716% ( 16) 00:08:55.307 8670.917 - 8721.329: 96.1492% ( 14) 00:08:55.307 8721.329 - 8771.742: 96.2156% ( 12) 00:08:55.307 8771.742 - 8822.154: 96.2877% ( 13) 00:08:55.307 8822.154 - 8872.566: 96.3486% ( 11) 00:08:55.307 8872.566 - 8922.978: 96.4262% ( 14) 00:08:55.307 8922.978 - 8973.391: 96.4927% ( 12) 00:08:55.307 8973.391 - 9023.803: 96.5592% ( 12) 00:08:55.307 9023.803 - 9074.215: 96.6312% ( 13) 00:08:55.307 9074.215 - 9124.628: 96.6922% ( 11) 00:08:55.307 9124.628 - 9175.040: 
96.7476% ( 10) 00:08:55.307 9175.040 - 9225.452: 96.8030% ( 10) 00:08:55.307 9225.452 - 9275.865: 96.8584% ( 10) 00:08:55.307 9275.865 - 9326.277: 96.9193% ( 11) 00:08:55.307 9326.277 - 9376.689: 96.9803% ( 11) 00:08:55.307 9376.689 - 9427.102: 97.0412% ( 11) 00:08:55.307 9427.102 - 9477.514: 97.0911% ( 9) 00:08:55.307 9477.514 - 9527.926: 97.1520% ( 11) 00:08:55.307 9527.926 - 9578.338: 97.2019% ( 9) 00:08:55.307 9578.338 - 9628.751: 97.2407% ( 7) 00:08:55.307 9628.751 - 9679.163: 97.2795% ( 7) 00:08:55.307 9679.163 - 9729.575: 97.3238% ( 8) 00:08:55.307 9729.575 - 9779.988: 97.3626% ( 7) 00:08:55.307 9779.988 - 9830.400: 97.3958% ( 6) 00:08:55.307 9830.400 - 9880.812: 97.4180% ( 4) 00:08:55.307 9880.812 - 9931.225: 97.4402% ( 4) 00:08:55.307 9931.225 - 9981.637: 97.4623% ( 4) 00:08:55.307 9981.637 - 10032.049: 97.4845% ( 4) 00:08:55.307 10032.049 - 10082.462: 97.5066% ( 4) 00:08:55.307 10082.462 - 10132.874: 97.5177% ( 2) 00:08:55.307 10536.172 - 10586.585: 97.5399% ( 4) 00:08:55.307 10586.585 - 10636.997: 97.6008% ( 11) 00:08:55.307 10636.997 - 10687.409: 97.6729% ( 13) 00:08:55.307 10687.409 - 10737.822: 97.7172% ( 8) 00:08:55.307 10737.822 - 10788.234: 97.7394% ( 4) 00:08:55.307 10788.234 - 10838.646: 97.7615% ( 4) 00:08:55.307 10838.646 - 10889.058: 97.8059% ( 8) 00:08:55.307 10889.058 - 10939.471: 97.8502% ( 8) 00:08:55.307 10939.471 - 10989.883: 97.9056% ( 10) 00:08:55.307 10989.883 - 11040.295: 97.9555% ( 9) 00:08:55.307 11040.295 - 11090.708: 98.0164% ( 11) 00:08:55.307 11090.708 - 11141.120: 98.0718% ( 10) 00:08:55.307 11141.120 - 11191.532: 98.1328% ( 11) 00:08:55.307 11191.532 - 11241.945: 98.1771% ( 8) 00:08:55.307 11241.945 - 11292.357: 98.2380% ( 11) 00:08:55.307 11292.357 - 11342.769: 98.2934% ( 10) 00:08:55.307 11342.769 - 11393.182: 98.3488% ( 10) 00:08:55.307 11393.182 - 11443.594: 98.4043% ( 10) 00:08:55.307 11443.594 - 11494.006: 98.4375% ( 6) 00:08:55.307 11494.006 - 11544.418: 98.4597% ( 4) 00:08:55.307 11544.418 - 11594.831: 98.4763% ( 3) 00:08:55.307 11594.831 - 11645.243: 98.4984% ( 4) 00:08:55.307 11645.243 - 11695.655: 98.5151% ( 3) 00:08:55.307 11695.655 - 11746.068: 98.5317% ( 3) 00:08:55.307 11746.068 - 11796.480: 98.5539% ( 4) 00:08:55.307 11796.480 - 11846.892: 98.5705% ( 3) 00:08:55.307 11846.892 - 11897.305: 98.5816% ( 2) 00:08:55.307 12300.603 - 12351.015: 98.5871% ( 1) 00:08:55.307 12351.015 - 12401.428: 98.5982% ( 2) 00:08:55.307 12401.428 - 12451.840: 98.6093% ( 2) 00:08:55.307 12451.840 - 12502.252: 98.6259% ( 3) 00:08:55.307 12502.252 - 12552.665: 98.6370% ( 2) 00:08:55.307 12552.665 - 12603.077: 98.6425% ( 1) 00:08:55.307 12603.077 - 12653.489: 98.6591% ( 3) 00:08:55.307 12653.489 - 12703.902: 98.6702% ( 2) 00:08:55.307 12703.902 - 12754.314: 98.6813% ( 2) 00:08:55.307 12754.314 - 12804.726: 98.6924% ( 2) 00:08:55.307 12804.726 - 12855.138: 98.6979% ( 1) 00:08:55.307 12855.138 - 12905.551: 98.7090% ( 2) 00:08:55.307 12905.551 - 13006.375: 98.7312% ( 4) 00:08:55.308 13006.375 - 13107.200: 98.7755% ( 8) 00:08:55.308 13107.200 - 13208.025: 98.8198% ( 8) 00:08:55.308 13208.025 - 13308.849: 98.8586% ( 7) 00:08:55.308 13308.849 - 13409.674: 98.8974% ( 7) 00:08:55.308 13409.674 - 13510.498: 98.9417% ( 8) 00:08:55.308 13510.498 - 13611.323: 98.9860% ( 8) 00:08:55.308 13611.323 - 13712.148: 99.0304% ( 8) 00:08:55.308 13712.148 - 13812.972: 99.0691% ( 7) 00:08:55.308 13812.972 - 13913.797: 99.1079% ( 7) 00:08:55.308 13913.797 - 14014.622: 99.1467% ( 7) 00:08:55.308 14014.622 - 14115.446: 99.1689% ( 4) 00:08:55.308 14115.446 - 14216.271: 99.1910% ( 4) 
00:08:55.308 14216.271 - 14317.095: 99.2132% ( 4) 00:08:55.308 14317.095 - 14417.920: 99.2354% ( 4) 00:08:55.308 14417.920 - 14518.745: 99.2575% ( 4) 00:08:55.308 14518.745 - 14619.569: 99.2797% ( 4) 00:08:55.308 14619.569 - 14720.394: 99.2908% ( 2) 00:08:55.308 25609.452 - 25710.277: 99.3129% ( 4) 00:08:55.308 25710.277 - 25811.102: 99.3351% ( 4) 00:08:55.308 25811.102 - 26012.751: 99.3794% ( 8) 00:08:55.308 26012.751 - 26214.400: 99.4182% ( 7) 00:08:55.308 26214.400 - 26416.049: 99.4625% ( 8) 00:08:55.308 26416.049 - 26617.698: 99.5069% ( 8) 00:08:55.308 26617.698 - 26819.348: 99.5457% ( 7) 00:08:55.308 26819.348 - 27020.997: 99.5900% ( 8) 00:08:55.308 27020.997 - 27222.646: 99.6288% ( 7) 00:08:55.308 27222.646 - 27424.295: 99.6454% ( 3) 00:08:55.308 30247.385 - 30449.034: 99.6565% ( 2) 00:08:55.308 30449.034 - 30650.683: 99.6953% ( 7) 00:08:55.308 30650.683 - 30852.332: 99.7396% ( 8) 00:08:55.308 30852.332 - 31053.982: 99.7839% ( 8) 00:08:55.308 31053.982 - 31255.631: 99.8227% ( 7) 00:08:55.308 31255.631 - 31457.280: 99.8615% ( 7) 00:08:55.308 31457.280 - 31658.929: 99.9058% ( 8) 00:08:55.308 31658.929 - 31860.578: 99.9501% ( 8) 00:08:55.308 31860.578 - 32062.228: 99.9945% ( 8) 00:08:55.308 32062.228 - 32263.877: 100.0000% ( 1) 00:08:55.308 00:08:55.308 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.308 ============================================================================== 00:08:55.308 Range in us Cumulative IO count 00:08:55.308 6099.889 - 6125.095: 0.0554% ( 10) 00:08:55.308 6125.095 - 6150.302: 0.1662% ( 20) 00:08:55.308 6150.302 - 6175.508: 0.4100% ( 44) 00:08:55.308 6175.508 - 6200.714: 0.8145% ( 73) 00:08:55.308 6200.714 - 6225.920: 1.5348% ( 130) 00:08:55.308 6225.920 - 6251.126: 2.6817% ( 207) 00:08:55.308 6251.126 - 6276.332: 4.4215% ( 314) 00:08:55.308 6276.332 - 6301.538: 6.3165% ( 342) 00:08:55.308 6301.538 - 6326.745: 8.3610% ( 369) 00:08:55.308 6326.745 - 6351.951: 10.4998% ( 386) 00:08:55.308 6351.951 - 6377.157: 12.8324% ( 421) 00:08:55.308 6377.157 - 6402.363: 15.3036% ( 446) 00:08:55.308 6402.363 - 6427.569: 17.4535% ( 388) 00:08:55.308 6427.569 - 6452.775: 19.5479% ( 378) 00:08:55.308 6452.775 - 6503.188: 23.6259% ( 736) 00:08:55.308 6503.188 - 6553.600: 28.0142% ( 792) 00:08:55.308 6553.600 - 6604.012: 32.6851% ( 843) 00:08:55.308 6604.012 - 6654.425: 37.6441% ( 895) 00:08:55.308 6654.425 - 6704.837: 42.3980% ( 858) 00:08:55.308 6704.837 - 6755.249: 47.1576% ( 859) 00:08:55.308 6755.249 - 6805.662: 52.0667% ( 886) 00:08:55.308 6805.662 - 6856.074: 56.8983% ( 872) 00:08:55.308 6856.074 - 6906.486: 61.6910% ( 865) 00:08:55.308 6906.486 - 6956.898: 66.5836% ( 883) 00:08:55.308 6956.898 - 7007.311: 71.4871% ( 885) 00:08:55.308 7007.311 - 7057.723: 76.4018% ( 887) 00:08:55.308 7057.723 - 7108.135: 81.0395% ( 837) 00:08:55.308 7108.135 - 7158.548: 85.0621% ( 726) 00:08:55.308 7158.548 - 7208.960: 87.9987% ( 530) 00:08:55.308 7208.960 - 7259.372: 89.8271% ( 330) 00:08:55.308 7259.372 - 7309.785: 90.9242% ( 198) 00:08:55.308 7309.785 - 7360.197: 91.8495% ( 167) 00:08:55.308 7360.197 - 7410.609: 92.6751% ( 149) 00:08:55.308 7410.609 - 7461.022: 93.3344% ( 119) 00:08:55.308 7461.022 - 7511.434: 93.7611% ( 77) 00:08:55.308 7511.434 - 7561.846: 94.0492% ( 52) 00:08:55.308 7561.846 - 7612.258: 94.2708% ( 40) 00:08:55.308 7612.258 - 7662.671: 94.4149% ( 26) 00:08:55.308 7662.671 - 7713.083: 94.5811% ( 30) 00:08:55.308 7713.083 - 7763.495: 94.7252% ( 26) 00:08:55.308 7763.495 - 7813.908: 94.8748% ( 27) 00:08:55.308 7813.908 - 7864.320: 95.0078% ( 24) 
00:08:55.308 7864.320 - 7914.732: 95.0909% ( 15) 00:08:55.308 7914.732 - 7965.145: 95.1740% ( 15) 00:08:55.308 7965.145 - 8015.557: 95.2294% ( 10) 00:08:55.308 8015.557 - 8065.969: 95.2959% ( 12) 00:08:55.308 8065.969 - 8116.382: 95.3845% ( 16) 00:08:55.308 8116.382 - 8166.794: 95.5341% ( 27) 00:08:55.308 8166.794 - 8217.206: 95.6228% ( 16) 00:08:55.308 8217.206 - 8267.618: 95.6837% ( 11) 00:08:55.308 8267.618 - 8318.031: 95.7225% ( 7) 00:08:55.308 8318.031 - 8368.443: 95.7668% ( 8) 00:08:55.308 8368.443 - 8418.855: 95.8167% ( 9) 00:08:55.308 8418.855 - 8469.268: 95.8500% ( 6) 00:08:55.308 8469.268 - 8519.680: 95.8887% ( 7) 00:08:55.308 8519.680 - 8570.092: 95.9552% ( 12) 00:08:55.308 8570.092 - 8620.505: 96.0217% ( 12) 00:08:55.308 8620.505 - 8670.917: 96.0827% ( 11) 00:08:55.308 8670.917 - 8721.329: 96.1547% ( 13) 00:08:55.308 8721.329 - 8771.742: 96.2156% ( 11) 00:08:55.308 8771.742 - 8822.154: 96.2711% ( 10) 00:08:55.308 8822.154 - 8872.566: 96.3320% ( 11) 00:08:55.308 8872.566 - 8922.978: 96.4040% ( 13) 00:08:55.308 8922.978 - 8973.391: 96.4705% ( 12) 00:08:55.308 8973.391 - 9023.803: 96.5536% ( 15) 00:08:55.308 9023.803 - 9074.215: 96.6035% ( 9) 00:08:55.308 9074.215 - 9124.628: 96.6534% ( 9) 00:08:55.308 9124.628 - 9175.040: 96.7143% ( 11) 00:08:55.308 9175.040 - 9225.452: 96.7753% ( 11) 00:08:55.308 9225.452 - 9275.865: 96.8418% ( 12) 00:08:55.308 9275.865 - 9326.277: 96.9082% ( 12) 00:08:55.308 9326.277 - 9376.689: 96.9637% ( 10) 00:08:55.308 9376.689 - 9427.102: 97.0135% ( 9) 00:08:55.308 9427.102 - 9477.514: 97.0634% ( 9) 00:08:55.308 9477.514 - 9527.926: 97.1077% ( 8) 00:08:55.308 9527.926 - 9578.338: 97.1465% ( 7) 00:08:55.308 9578.338 - 9628.751: 97.1687% ( 4) 00:08:55.308 9628.751 - 9679.163: 97.1964% ( 5) 00:08:55.308 9679.163 - 9729.575: 97.2185% ( 4) 00:08:55.308 9729.575 - 9779.988: 97.2462% ( 5) 00:08:55.308 9779.988 - 9830.400: 97.2739% ( 5) 00:08:55.308 9830.400 - 9880.812: 97.2961% ( 4) 00:08:55.308 9880.812 - 9931.225: 97.3072% ( 2) 00:08:55.308 9931.225 - 9981.637: 97.3183% ( 2) 00:08:55.308 9981.637 - 10032.049: 97.3293% ( 2) 00:08:55.308 10032.049 - 10082.462: 97.3404% ( 2) 00:08:55.308 10082.462 - 10132.874: 97.3515% ( 2) 00:08:55.308 10132.874 - 10183.286: 97.3626% ( 2) 00:08:55.308 10183.286 - 10233.698: 97.3737% ( 2) 00:08:55.308 10233.698 - 10284.111: 97.3848% ( 2) 00:08:55.308 10284.111 - 10334.523: 97.3958% ( 2) 00:08:55.308 10334.523 - 10384.935: 97.4069% ( 2) 00:08:55.308 10384.935 - 10435.348: 97.4180% ( 2) 00:08:55.308 10435.348 - 10485.760: 97.4291% ( 2) 00:08:55.308 10485.760 - 10536.172: 97.4512% ( 4) 00:08:55.308 10536.172 - 10586.585: 97.4789% ( 5) 00:08:55.308 10586.585 - 10636.997: 97.5066% ( 5) 00:08:55.308 10636.997 - 10687.409: 97.5454% ( 7) 00:08:55.308 10687.409 - 10737.822: 97.5898% ( 8) 00:08:55.308 10737.822 - 10788.234: 97.6341% ( 8) 00:08:55.308 10788.234 - 10838.646: 97.6840% ( 9) 00:08:55.308 10838.646 - 10889.058: 97.7338% ( 9) 00:08:55.308 10889.058 - 10939.471: 97.7671% ( 6) 00:08:55.308 10939.471 - 10989.883: 97.8003% ( 6) 00:08:55.308 10989.883 - 11040.295: 97.8391% ( 7) 00:08:55.308 11040.295 - 11090.708: 97.8723% ( 6) 00:08:55.308 11090.708 - 11141.120: 97.9167% ( 8) 00:08:55.308 11141.120 - 11191.532: 97.9942% ( 14) 00:08:55.308 11191.532 - 11241.945: 98.0386% ( 8) 00:08:55.308 11241.945 - 11292.357: 98.0884% ( 9) 00:08:55.308 11292.357 - 11342.769: 98.1494% ( 11) 00:08:55.308 11342.769 - 11393.182: 98.1992% ( 9) 00:08:55.308 11393.182 - 11443.594: 98.2491% ( 9) 00:08:55.308 11443.594 - 11494.006: 98.2879% ( 7) 00:08:55.308 
11494.006 - 11544.418: 98.3211% ( 6) 00:08:55.308 11544.418 - 11594.831: 98.3544% ( 6) 00:08:55.308 11594.831 - 11645.243: 98.3876% ( 6) 00:08:55.308 11645.243 - 11695.655: 98.4541% ( 12) 00:08:55.308 11695.655 - 11746.068: 98.5040% ( 9) 00:08:55.308 11746.068 - 11796.480: 98.5372% ( 6) 00:08:55.308 11796.480 - 11846.892: 98.5594% ( 4) 00:08:55.308 11846.892 - 11897.305: 98.5926% ( 6) 00:08:55.308 11897.305 - 11947.717: 98.6203% ( 5) 00:08:55.308 11947.717 - 11998.129: 98.6536% ( 6) 00:08:55.308 11998.129 - 12048.542: 98.6813% ( 5) 00:08:55.308 12048.542 - 12098.954: 98.6979% ( 3) 00:08:55.308 12098.954 - 12149.366: 98.7090% ( 2) 00:08:55.308 12149.366 - 12199.778: 98.7201% ( 2) 00:08:55.308 12199.778 - 12250.191: 98.7312% ( 2) 00:08:55.308 12250.191 - 12300.603: 98.7422% ( 2) 00:08:55.308 12300.603 - 12351.015: 98.7533% ( 2) 00:08:55.308 12351.015 - 12401.428: 98.7589% ( 1) 00:08:55.308 12401.428 - 12451.840: 98.7699% ( 2) 00:08:55.308 12451.840 - 12502.252: 98.7810% ( 2) 00:08:55.308 12502.252 - 12552.665: 98.7921% ( 2) 00:08:55.308 12552.665 - 12603.077: 98.8032% ( 2) 00:08:55.308 12603.077 - 12653.489: 98.8143% ( 2) 00:08:55.308 12653.489 - 12703.902: 98.8254% ( 2) 00:08:55.308 12703.902 - 12754.314: 98.8364% ( 2) 00:08:55.308 12754.314 - 12804.726: 98.8475% ( 2) 00:08:55.308 12804.726 - 12855.138: 98.8586% ( 2) 00:08:55.308 12855.138 - 12905.551: 98.8641% ( 1) 00:08:55.308 12905.551 - 13006.375: 98.8808% ( 3) 00:08:55.308 13006.375 - 13107.200: 98.8974% ( 3) 00:08:55.308 13107.200 - 13208.025: 98.9195% ( 4) 00:08:55.308 13208.025 - 13308.849: 98.9362% ( 3) 00:08:55.308 13812.972 - 13913.797: 98.9750% ( 7) 00:08:55.308 13913.797 - 14014.622: 98.9971% ( 4) 00:08:55.308 14014.622 - 14115.446: 99.0082% ( 2) 00:08:55.308 14115.446 - 14216.271: 99.0304% ( 4) 00:08:55.308 14216.271 - 14317.095: 99.0525% ( 4) 00:08:55.308 14317.095 - 14417.920: 99.0747% ( 4) 00:08:55.308 14417.920 - 14518.745: 99.0969% ( 4) 00:08:55.308 14518.745 - 14619.569: 99.1190% ( 4) 00:08:55.308 14619.569 - 14720.394: 99.1412% ( 4) 00:08:55.308 14720.394 - 14821.218: 99.1578% ( 3) 00:08:55.308 14821.218 - 14922.043: 99.1800% ( 4) 00:08:55.308 14922.043 - 15022.868: 99.2021% ( 4) 00:08:55.308 15022.868 - 15123.692: 99.2243% ( 4) 00:08:55.308 15123.692 - 15224.517: 99.2465% ( 4) 00:08:55.308 15224.517 - 15325.342: 99.2686% ( 4) 00:08:55.308 15325.342 - 15426.166: 99.2908% ( 4) 00:08:55.308 23592.960 - 23693.785: 99.2963% ( 1) 00:08:55.308 23693.785 - 23794.609: 99.3185% ( 4) 00:08:55.308 23794.609 - 23895.434: 99.3406% ( 4) 00:08:55.308 23895.434 - 23996.258: 99.3628% ( 4) 00:08:55.308 23996.258 - 24097.083: 99.3794% ( 3) 00:08:55.308 24097.083 - 24197.908: 99.4016% ( 4) 00:08:55.308 24197.908 - 24298.732: 99.4182% ( 3) 00:08:55.308 24298.732 - 24399.557: 99.4404% ( 4) 00:08:55.308 24399.557 - 24500.382: 99.4625% ( 4) 00:08:55.308 24500.382 - 24601.206: 99.4847% ( 4) 00:08:55.308 24601.206 - 24702.031: 99.5069% ( 4) 00:08:55.308 24702.031 - 24802.855: 99.5290% ( 4) 00:08:55.308 24802.855 - 24903.680: 99.5457% ( 3) 00:08:55.308 24903.680 - 25004.505: 99.5678% ( 4) 00:08:55.308 25004.505 - 25105.329: 99.5900% ( 4) 00:08:55.308 25105.329 - 25206.154: 99.6121% ( 4) 00:08:55.308 25206.154 - 25306.978: 99.6343% ( 4) 00:08:55.308 25306.978 - 25407.803: 99.6454% ( 2) 00:08:55.308 28432.542 - 28634.191: 99.6842% ( 7) 00:08:55.308 28634.191 - 28835.840: 99.7230% ( 7) 00:08:55.308 28835.840 - 29037.489: 99.7673% ( 8) 00:08:55.308 29037.489 - 29239.138: 99.8061% ( 7) 00:08:55.308 29239.138 - 29440.788: 99.8504% ( 8) 00:08:55.308 
29440.788 - 29642.437: 99.8947% ( 8) 00:08:55.308 29642.437 - 29844.086: 99.9335% ( 7) 00:08:55.308 29844.086 - 30045.735: 99.9778% ( 8) 00:08:55.308 30045.735 - 30247.385: 100.0000% ( 4) 00:08:55.308 00:08:55.308 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:55.308 ============================================================================== 00:08:55.308 Range in us Cumulative IO count 00:08:55.308 6074.683 - 6099.889: 0.0110% ( 2) 00:08:55.308 6099.889 - 6125.095: 0.0718% ( 11) 00:08:55.308 6125.095 - 6150.302: 0.1988% ( 23) 00:08:55.308 6150.302 - 6175.508: 0.4141% ( 39) 00:08:55.308 6175.508 - 6200.714: 0.8282% ( 75) 00:08:55.308 6200.714 - 6225.920: 1.6343% ( 146) 00:08:55.308 6225.920 - 6251.126: 2.8213% ( 215) 00:08:55.308 6251.126 - 6276.332: 4.1906% ( 248) 00:08:55.308 6276.332 - 6301.538: 6.0568% ( 338) 00:08:55.308 6301.538 - 6326.745: 8.1106% ( 372) 00:08:55.308 6326.745 - 6351.951: 10.3909% ( 413) 00:08:55.308 6351.951 - 6377.157: 12.8975% ( 454) 00:08:55.308 6377.157 - 6402.363: 15.0839% ( 396) 00:08:55.308 6402.363 - 6427.569: 17.2261% ( 388) 00:08:55.308 6427.569 - 6452.775: 19.3905% ( 392) 00:08:55.308 6452.775 - 6503.188: 23.6363% ( 769) 00:08:55.308 6503.188 - 6553.600: 27.8655% ( 766) 00:08:55.308 6553.600 - 6604.012: 32.5751% ( 853) 00:08:55.308 6604.012 - 6654.425: 37.4669% ( 886) 00:08:55.308 6654.425 - 6704.837: 42.3807% ( 890) 00:08:55.308 6704.837 - 6755.249: 47.1842% ( 870) 00:08:55.308 6755.249 - 6805.662: 51.9324% ( 860) 00:08:55.308 6805.662 - 6856.074: 56.7800% ( 878) 00:08:55.308 6856.074 - 6906.486: 61.6663% ( 885) 00:08:55.308 6906.486 - 6956.898: 66.5912% ( 892) 00:08:55.308 6956.898 - 7007.311: 71.5216% ( 893) 00:08:55.308 7007.311 - 7057.723: 76.3472% ( 874) 00:08:55.308 7057.723 - 7108.135: 80.9905% ( 841) 00:08:55.308 7108.135 - 7158.548: 85.0486% ( 735) 00:08:55.308 7158.548 - 7208.960: 88.0300% ( 540) 00:08:55.308 7208.960 - 7259.372: 89.7803% ( 317) 00:08:55.308 7259.372 - 7309.785: 90.8735% ( 198) 00:08:55.308 7309.785 - 7360.197: 91.8341% ( 174) 00:08:55.308 7360.197 - 7410.609: 92.6844% ( 154) 00:08:55.308 7410.609 - 7461.022: 93.3635% ( 123) 00:08:55.308 7461.022 - 7511.434: 93.8549% ( 89) 00:08:55.308 7511.434 - 7561.846: 94.1310% ( 50) 00:08:55.308 7561.846 - 7612.258: 94.3408% ( 38) 00:08:55.308 7612.258 - 7662.671: 94.5009% ( 29) 00:08:55.308 7662.671 - 7713.083: 94.6720% ( 31) 00:08:55.308 7713.083 - 7763.495: 94.8266% ( 28) 00:08:55.308 7763.495 - 7813.908: 94.9757% ( 27) 00:08:55.308 7813.908 - 7864.320: 95.0861% ( 20) 00:08:55.308 7864.320 - 7914.732: 95.1966% ( 20) 00:08:55.308 7914.732 - 7965.145: 95.2959% ( 18) 00:08:55.308 7965.145 - 8015.557: 95.3843% ( 16) 00:08:55.308 8015.557 - 8065.969: 95.4505% ( 12) 00:08:55.308 8065.969 - 8116.382: 95.4947% ( 8) 00:08:55.308 8116.382 - 8166.794: 95.5444% ( 9) 00:08:55.308 8166.794 - 8217.206: 95.5830% ( 7) 00:08:55.308 8217.206 - 8267.618: 95.6272% ( 8) 00:08:55.308 8267.618 - 8318.031: 95.6659% ( 7) 00:08:55.308 8318.031 - 8368.443: 95.7211% ( 10) 00:08:55.308 8368.443 - 8418.855: 95.7708% ( 9) 00:08:55.308 8418.855 - 8469.268: 95.8425% ( 13) 00:08:55.308 8469.268 - 8519.680: 95.8977% ( 10) 00:08:55.308 8519.680 - 8570.092: 95.9750% ( 14) 00:08:55.308 8570.092 - 8620.505: 96.0247% ( 9) 00:08:55.308 8620.505 - 8670.917: 96.0855% ( 11) 00:08:55.308 8670.917 - 8721.329: 96.1352% ( 9) 00:08:55.308 8721.329 - 8771.742: 96.1904% ( 10) 00:08:55.308 8771.742 - 8822.154: 96.2401% ( 9) 00:08:55.308 8822.154 - 8872.566: 96.2898% ( 9) 00:08:55.308 8872.566 - 8922.978: 96.3450% ( 
10) 00:08:55.308 8922.978 - 8973.391: 96.3891% ( 8) 00:08:55.308 8973.391 - 9023.803: 96.4388% ( 9) 00:08:55.308 9023.803 - 9074.215: 96.4885% ( 9) 00:08:55.308 9074.215 - 9124.628: 96.5382% ( 9) 00:08:55.308 9124.628 - 9175.040: 96.5989% ( 11) 00:08:55.308 9175.040 - 9225.452: 96.6707% ( 13) 00:08:55.308 9225.452 - 9275.865: 96.7425% ( 13) 00:08:55.308 9275.865 - 9326.277: 96.7811% ( 7) 00:08:55.308 9326.277 - 9376.689: 96.8253% ( 8) 00:08:55.308 9376.689 - 9427.102: 96.8750% ( 9) 00:08:55.308 9427.102 - 9477.514: 96.9136% ( 7) 00:08:55.308 9477.514 - 9527.926: 96.9413% ( 5) 00:08:55.308 9527.926 - 9578.338: 96.9854% ( 8) 00:08:55.308 9578.338 - 9628.751: 97.0020% ( 3) 00:08:55.308 9628.751 - 9679.163: 97.0186% ( 3) 00:08:55.308 9679.163 - 9729.575: 97.0517% ( 6) 00:08:55.308 9729.575 - 9779.988: 97.0627% ( 2) 00:08:55.308 9779.988 - 9830.400: 97.0848% ( 4) 00:08:55.308 9830.400 - 9880.812: 97.1014% ( 3) 00:08:55.308 9880.812 - 9931.225: 97.1179% ( 3) 00:08:55.308 9931.225 - 9981.637: 97.1345% ( 3) 00:08:55.308 9981.637 - 10032.049: 97.1566% ( 4) 00:08:55.308 10032.049 - 10082.462: 97.1897% ( 6) 00:08:55.308 10082.462 - 10132.874: 97.2063% ( 3) 00:08:55.309 10132.874 - 10183.286: 97.2284% ( 4) 00:08:55.309 10183.286 - 10233.698: 97.2615% ( 6) 00:08:55.309 10233.698 - 10284.111: 97.2946% ( 6) 00:08:55.309 10284.111 - 10334.523: 97.3498% ( 10) 00:08:55.309 10334.523 - 10384.935: 97.3995% ( 9) 00:08:55.309 10384.935 - 10435.348: 97.4823% ( 15) 00:08:55.309 10435.348 - 10485.760: 97.5210% ( 7) 00:08:55.309 10485.760 - 10536.172: 97.5707% ( 9) 00:08:55.309 10536.172 - 10586.585: 97.6093% ( 7) 00:08:55.309 10586.585 - 10636.997: 97.6480% ( 7) 00:08:55.309 10636.997 - 10687.409: 97.6921% ( 8) 00:08:55.309 10687.409 - 10737.822: 97.7308% ( 7) 00:08:55.309 10737.822 - 10788.234: 97.7750% ( 8) 00:08:55.309 10788.234 - 10838.646: 97.8246% ( 9) 00:08:55.309 10838.646 - 10889.058: 97.8799% ( 10) 00:08:55.309 10889.058 - 10939.471: 97.9516% ( 13) 00:08:55.309 10939.471 - 10989.883: 98.0068% ( 10) 00:08:55.309 10989.883 - 11040.295: 98.0621% ( 10) 00:08:55.309 11040.295 - 11090.708: 98.1007% ( 7) 00:08:55.309 11090.708 - 11141.120: 98.1394% ( 7) 00:08:55.309 11141.120 - 11191.532: 98.1725% ( 6) 00:08:55.309 11191.532 - 11241.945: 98.2167% ( 8) 00:08:55.309 11241.945 - 11292.357: 98.2443% ( 5) 00:08:55.309 11292.357 - 11342.769: 98.2608% ( 3) 00:08:55.309 11342.769 - 11393.182: 98.2774% ( 3) 00:08:55.309 11393.182 - 11443.594: 98.2995% ( 4) 00:08:55.309 11443.594 - 11494.006: 98.3216% ( 4) 00:08:55.309 11494.006 - 11544.418: 98.3436% ( 4) 00:08:55.309 11544.418 - 11594.831: 98.3657% ( 4) 00:08:55.309 11594.831 - 11645.243: 98.4044% ( 7) 00:08:55.309 11645.243 - 11695.655: 98.4706% ( 12) 00:08:55.309 11695.655 - 11746.068: 98.4982% ( 5) 00:08:55.309 11746.068 - 11796.480: 98.5203% ( 4) 00:08:55.309 11796.480 - 11846.892: 98.5534% ( 6) 00:08:55.309 11846.892 - 11897.305: 98.5866% ( 6) 00:08:55.309 11897.305 - 11947.717: 98.6142% ( 5) 00:08:55.309 11947.717 - 11998.129: 98.6418% ( 5) 00:08:55.309 11998.129 - 12048.542: 98.6694% ( 5) 00:08:55.309 12048.542 - 12098.954: 98.6970% ( 5) 00:08:55.309 12098.954 - 12149.366: 98.7246% ( 5) 00:08:55.309 12149.366 - 12199.778: 98.7577% ( 6) 00:08:55.309 12199.778 - 12250.191: 98.7798% ( 4) 00:08:55.309 12250.191 - 12300.603: 98.8074% ( 5) 00:08:55.309 12300.603 - 12351.015: 98.8405% ( 6) 00:08:55.309 12351.015 - 12401.428: 98.8682% ( 5) 00:08:55.309 12401.428 - 12451.840: 98.8958% ( 5) 00:08:55.309 12451.840 - 12502.252: 98.9234% ( 5) 00:08:55.309 12502.252 - 
12552.665: 98.9399% ( 3)
00:08:55.309 14518.745 - 14619.569: 98.9565% ( 3)
00:08:55.309 14619.569 - 14720.394: 98.9896% ( 6)
00:08:55.309 14720.394 - 14821.218: 99.0007% ( 2)
00:08:55.309 14821.218 - 14922.043: 99.0117% ( 2)
00:08:55.309 14922.043 - 15022.868: 99.0338% ( 4)
00:08:55.309 15022.868 - 15123.692: 99.0559% ( 4)
00:08:55.309 15123.692 - 15224.517: 99.0780% ( 4)
00:08:55.309 15224.517 - 15325.342: 99.1000% ( 4)
00:08:55.309 15325.342 - 15426.166: 99.1221% ( 4)
00:08:55.309 15426.166 - 15526.991: 99.1442% ( 4)
00:08:55.309 15526.991 - 15627.815: 99.1663% ( 4)
00:08:55.309 15627.815 - 15728.640: 99.1884% ( 4)
00:08:55.309 15728.640 - 15829.465: 99.2160% ( 5)
00:08:55.309 15829.465 - 15930.289: 99.2546% ( 7)
00:08:55.309 15930.289 - 16031.114: 99.2933% ( 7)
00:08:55.309 18148.431 - 18249.255: 99.3154% ( 4)
00:08:55.309 18249.255 - 18350.080: 99.3375% ( 4)
00:08:55.309 18350.080 - 18450.905: 99.3595% ( 4)
00:08:55.309 18450.905 - 18551.729: 99.3761% ( 3)
00:08:55.309 18551.729 - 18652.554: 99.3982% ( 4)
00:08:55.309 18652.554 - 18753.378: 99.4203% ( 4)
00:08:55.309 18753.378 - 18854.203: 99.4424% ( 4)
00:08:55.309 18854.203 - 18955.028: 99.4644% ( 4)
00:08:55.309 18955.028 - 19055.852: 99.4865% ( 4)
00:08:55.309 19055.852 - 19156.677: 99.5086% ( 4)
00:08:55.309 19156.677 - 19257.502: 99.5307% ( 4)
00:08:55.309 19257.502 - 19358.326: 99.5473% ( 3)
00:08:55.309 19358.326 - 19459.151: 99.5693% ( 4)
00:08:55.309 19459.151 - 19559.975: 99.5859% ( 3)
00:08:55.309 19559.975 - 19660.800: 99.6080% ( 4)
00:08:55.309 19660.800 - 19761.625: 99.6301% ( 4)
00:08:55.309 19761.625 - 19862.449: 99.6466% ( 3)
00:08:55.309 22988.012 - 23088.837: 99.6687% ( 4)
00:08:55.309 23088.837 - 23189.662: 99.6908% ( 4)
00:08:55.309 23189.662 - 23290.486: 99.7129% ( 4)
00:08:55.309 23290.486 - 23391.311: 99.7295% ( 3)
00:08:55.309 23391.311 - 23492.135: 99.7515% ( 4)
00:08:55.309 23492.135 - 23592.960: 99.7736% ( 4)
00:08:55.309 23592.960 - 23693.785: 99.7957% ( 4)
00:08:55.309 23693.785 - 23794.609: 99.8178% ( 4)
00:08:55.309 23794.609 - 23895.434: 99.8344% ( 3)
00:08:55.309 23895.434 - 23996.258: 99.8564% ( 4)
00:08:55.309 23996.258 - 24097.083: 99.8730% ( 3)
00:08:55.309 24097.083 - 24197.908: 99.8951% ( 4)
00:08:55.309 24197.908 - 24298.732: 99.9172% ( 4)
00:08:55.309 24298.732 - 24399.557: 99.9393% ( 4)
00:08:55.309 24399.557 - 24500.382: 99.9614% ( 4)
00:08:55.309 24500.382 - 24601.206: 99.9834% ( 4)
00:08:55.309 24601.206 - 24702.031: 100.0000% ( 3)
00:08:55.309
00:08:55.309
00:08:55.309 14:41:33 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:56.244 Initializing NVMe Controllers
00:08:56.244 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:56.244 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:56.244 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:56.244 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:56.244 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:56.244 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:56.244 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:56.244 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:56.244 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:56.244 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:56.244 Initialization complete. Launching workers.
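The spdk_nvme_perf invocation above launches the next pass of this perf test: a 1-second sequential-write workload against all six namespaces. A minimal annotated restatement of the same command follows; it is a reader's sketch, not part of the job, and the flag meanings are taken from the tool's usage text and should be double-checked against the SPDK revision built earlier in this log.

    # Annotated restatement of the logged invocation (paths as in this job's VM).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128   `# queue depth: up to 128 outstanding I/Os per namespace` \
        -w write `# I/O pattern: sequential writes` \
        -o 12288 `# I/O size in bytes (12 KiB)` \
        -t 1     `# run time in seconds` \
        -LL      `# latency tracking; given twice, also print per-bucket histograms` \
        -i 0     `# shared memory group ID`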
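The MiB/s column in the device table below follows directly from IOPS times the 12,288-byte I/O size set by -o, so the table can be sanity-checked with one line of awk; both figures below come straight from the table:

    # Per-device row: 17262.86 IOPS * 12288 B per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f\n", 17262.86 * 12288 / 1048576 }'   # -> 202.30
    # Total row: 103641.08 IOPS aggregated across all six namespaces
    awk 'BEGIN { printf "%.2f\n", 103641.08 * 12288 / 1048576 }'  # -> 1214.54

Both match the 202.30 MiB/s and 1214.54 MiB/s reported below.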
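The percentile summaries and the per-bucket latency histograms further below appear to carry the same information: each summary value (for example, 50.00000% : 6805.662us for 0000:00:10.0 NSID 1) coincides with the upper edge of the first histogram bucket whose cumulative percentage reaches that percentile. A sketch of that lookup, assuming the buckets for one namespace were copied into a hypothetical hist.txt as one 'low - high: cum% ( count)' entry per line:

    # Print the upper bucket edge where the cumulative share first reaches
    # percentile p (50 here); hist.txt is a hypothetical per-namespace dump.
    awk -v p=50 '{ cum = $4; sub(/%/, "", cum);
                   if (cum + 0 >= p) { sub(/:/, "", $3); print $3 "us"; exit } }' hist.txt

Run against the 0000:00:10.0 NSID 1 histogram below, this prints 6805.662us, matching that device's summary line.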
00:08:56.244 ========================================================
00:08:56.244 Latency(us)
00:08:56.244 Device Information : IOPS MiB/s Average min max
00:08:56.244 PCIE (0000:00:10.0) NSID 1 from core 0: 17262.86 202.30 7425.18 5657.71 32993.12
00:08:56.244 PCIE (0000:00:11.0) NSID 1 from core 0: 17262.86 202.30 7412.53 5730.66 31015.73
00:08:56.244 PCIE (0000:00:13.0) NSID 1 from core 0: 17262.86 202.30 7399.69 5752.34 29160.77
00:08:56.244 PCIE (0000:00:12.0) NSID 1 from core 0: 17262.86 202.30 7386.86 5664.07 27189.04
00:08:56.244 PCIE (0000:00:12.0) NSID 2 from core 0: 17262.86 202.30 7374.09 5644.68 25219.52
00:08:56.244 PCIE (0000:00:12.0) NSID 3 from core 0: 17326.79 203.05 7334.14 5638.96 19737.89
00:08:56.244 ========================================================
00:08:56.244 Total : 103641.08 1214.54 7388.71 5638.96 32993.12
00:08:56.244
00:08:56.244 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:56.244 =================================================================================
00:08:56.244 1.00000% : 5923.446us
00:08:56.244 10.00000% : 6225.920us
00:08:56.244 25.00000% : 6503.188us
00:08:56.244 50.00000% : 6805.662us
00:08:56.244 75.00000% : 7309.785us
00:08:56.244 90.00000% : 8670.917us
00:08:56.244 95.00000% : 12098.954us
00:08:56.244 98.00000% : 14014.622us
00:08:56.244 99.00000% : 15325.342us
00:08:56.244 99.50000% : 27020.997us
00:08:56.244 99.90000% : 32667.175us
00:08:56.244 99.99000% : 33070.474us
00:08:56.244 99.99900% : 33070.474us
00:08:56.244 99.99990% : 33070.474us
00:08:56.244 99.99999% : 33070.474us
00:08:56.244
00:08:56.244 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:56.244 =================================================================================
00:08:56.244 1.00000% : 6074.683us
00:08:56.244 10.00000% : 6276.332us
00:08:56.244 25.00000% : 6503.188us
00:08:56.244 50.00000% : 6805.662us
00:08:56.244 75.00000% : 7259.372us
00:08:56.244 90.00000% : 8822.154us
00:08:56.244 95.00000% : 12351.015us
00:08:56.244 98.00000% : 14115.446us
00:08:56.244 99.00000% : 15627.815us
00:08:56.244 99.50000% : 25306.978us
00:08:56.244 99.90000% : 30650.683us
00:08:56.244 99.99000% : 31053.982us
00:08:56.244 99.99900% : 31053.982us
00:08:56.244 99.99990% : 31053.982us
00:08:56.244 99.99999% : 31053.982us
00:08:56.244
00:08:56.244 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:56.244 =================================================================================
00:08:56.244 1.00000% : 6049.477us
00:08:56.244 10.00000% : 6251.126us
00:08:56.244 25.00000% : 6452.775us
00:08:56.244 50.00000% : 6805.662us
00:08:56.244 75.00000% : 7259.372us
00:08:56.244 90.00000% : 9074.215us
00:08:56.244 95.00000% : 11846.892us
00:08:56.244 98.00000% : 14014.622us
00:08:56.244 99.00000% : 15325.342us
00:08:56.244 99.50000% : 23492.135us
00:08:56.244 99.90000% : 28835.840us
00:08:56.244 99.99000% : 29239.138us
00:08:56.244 99.99900% : 29239.138us
00:08:56.244 99.99990% : 29239.138us
00:08:56.244 99.99999% : 29239.138us
00:08:56.244
00:08:56.244 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:56.244 =================================================================================
00:08:56.244 1.00000% : 6049.477us
00:08:56.244 10.00000% : 6251.126us
00:08:56.244 25.00000% : 6503.188us
00:08:56.244 50.00000% : 6805.662us
00:08:56.244 75.00000% : 7208.960us
00:08:56.244 90.00000% : 9225.452us
00:08:56.244 95.00000% : 11746.068us
00:08:56.244 98.00000% : 14115.446us
00:08:56.244 99.00000%
: 15325.342us 00:08:56.244 99.50000% : 21778.117us 00:08:56.244 99.90000% : 26819.348us 00:08:56.244 99.99000% : 27222.646us 00:08:56.244 99.99900% : 27222.646us 00:08:56.244 99.99990% : 27222.646us 00:08:56.244 99.99999% : 27222.646us 00:08:56.244 00:08:56.244 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:56.244 ================================================================================= 00:08:56.244 1.00000% : 6049.477us 00:08:56.244 10.00000% : 6251.126us 00:08:56.244 25.00000% : 6503.188us 00:08:56.244 50.00000% : 6805.662us 00:08:56.244 75.00000% : 7208.960us 00:08:56.244 90.00000% : 9175.040us 00:08:56.244 95.00000% : 12199.778us 00:08:56.244 98.00000% : 14619.569us 00:08:56.244 99.00000% : 15224.517us 00:08:56.244 99.50000% : 19761.625us 00:08:56.244 99.90000% : 24802.855us 00:08:56.244 99.99000% : 25206.154us 00:08:56.244 99.99900% : 25306.978us 00:08:56.244 99.99990% : 25306.978us 00:08:56.244 99.99999% : 25306.978us 00:08:56.244 00:08:56.244 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:56.244 ================================================================================= 00:08:56.244 1.00000% : 6074.683us 00:08:56.244 10.00000% : 6276.332us 00:08:56.245 25.00000% : 6503.188us 00:08:56.245 50.00000% : 6805.662us 00:08:56.245 75.00000% : 7259.372us 00:08:56.245 90.00000% : 9023.803us 00:08:56.245 95.00000% : 11998.129us 00:08:56.245 98.00000% : 14014.622us 00:08:56.245 99.00000% : 15022.868us 00:08:56.245 99.50000% : 15325.342us 00:08:56.245 99.90000% : 19358.326us 00:08:56.245 99.99000% : 19761.625us 00:08:56.245 99.99900% : 19761.625us 00:08:56.245 99.99990% : 19761.625us 00:08:56.245 99.99999% : 19761.625us 00:08:56.245 00:08:56.245 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:56.245 ============================================================================== 00:08:56.245 Range in us Cumulative IO count 00:08:56.245 5646.178 - 5671.385: 0.0058% ( 1) 00:08:56.245 5671.385 - 5696.591: 0.0579% ( 9) 00:08:56.245 5696.591 - 5721.797: 0.0810% ( 4) 00:08:56.245 5721.797 - 5747.003: 0.1042% ( 4) 00:08:56.245 5747.003 - 5772.209: 0.1678% ( 11) 00:08:56.245 5772.209 - 5797.415: 0.2373% ( 12) 00:08:56.245 5797.415 - 5822.622: 0.3183% ( 14) 00:08:56.245 5822.622 - 5847.828: 0.4630% ( 25) 00:08:56.245 5847.828 - 5873.034: 0.5845% ( 21) 00:08:56.245 5873.034 - 5898.240: 0.7928% ( 36) 00:08:56.245 5898.240 - 5923.446: 1.1053% ( 54) 00:08:56.245 5923.446 - 5948.652: 1.5683% ( 80) 00:08:56.245 5948.652 - 5973.858: 2.0370% ( 81) 00:08:56.245 5973.858 - 5999.065: 2.5463% ( 88) 00:08:56.245 5999.065 - 6024.271: 3.1250% ( 100) 00:08:56.245 6024.271 - 6049.477: 3.7731% ( 112) 00:08:56.245 6049.477 - 6074.683: 4.6701% ( 155) 00:08:56.245 6074.683 - 6099.889: 5.4456% ( 134) 00:08:56.245 6099.889 - 6125.095: 6.3947% ( 164) 00:08:56.245 6125.095 - 6150.302: 7.2049% ( 140) 00:08:56.245 6150.302 - 6175.508: 8.1134% ( 157) 00:08:56.245 6175.508 - 6200.714: 9.1898% ( 186) 00:08:56.245 6200.714 - 6225.920: 10.3125% ( 194) 00:08:56.245 6225.920 - 6251.126: 11.5336% ( 211) 00:08:56.245 6251.126 - 6276.332: 12.7720% ( 214) 00:08:56.245 6276.332 - 6301.538: 14.0972% ( 229) 00:08:56.245 6301.538 - 6326.745: 15.4919% ( 241) 00:08:56.245 6326.745 - 6351.951: 17.1412% ( 285) 00:08:56.245 6351.951 - 6377.157: 18.6343% ( 258) 00:08:56.245 6377.157 - 6402.363: 20.5498% ( 331) 00:08:56.245 6402.363 - 6427.569: 22.4479% ( 328) 00:08:56.245 6427.569 - 6452.775: 24.5833% ( 369) 00:08:56.245 6452.775 - 6503.188: 28.9699% ( 758) 00:08:56.245 
6503.188 - 6553.600: 33.2755% ( 744) 00:08:56.245 6553.600 - 6604.012: 37.0023% ( 644) 00:08:56.245 6604.012 - 6654.425: 40.6192% ( 625) 00:08:56.245 6654.425 - 6704.837: 44.3287% ( 641) 00:08:56.245 6704.837 - 6755.249: 48.2986% ( 686) 00:08:56.245 6755.249 - 6805.662: 51.9965% ( 639) 00:08:56.245 6805.662 - 6856.074: 55.3762% ( 584) 00:08:56.245 6856.074 - 6906.486: 58.2697% ( 500) 00:08:56.245 6906.486 - 6956.898: 61.0243% ( 476) 00:08:56.245 6956.898 - 7007.311: 63.7442% ( 470) 00:08:56.245 7007.311 - 7057.723: 66.2731% ( 437) 00:08:56.245 7057.723 - 7108.135: 68.5995% ( 402) 00:08:56.245 7108.135 - 7158.548: 70.9201% ( 401) 00:08:56.245 7158.548 - 7208.960: 73.0729% ( 372) 00:08:56.245 7208.960 - 7259.372: 74.8843% ( 313) 00:08:56.245 7259.372 - 7309.785: 76.5799% ( 293) 00:08:56.245 7309.785 - 7360.197: 78.1192% ( 266) 00:08:56.245 7360.197 - 7410.609: 79.5081% ( 240) 00:08:56.245 7410.609 - 7461.022: 80.7292% ( 211) 00:08:56.245 7461.022 - 7511.434: 81.7245% ( 172) 00:08:56.245 7511.434 - 7561.846: 82.5579% ( 144) 00:08:56.245 7561.846 - 7612.258: 83.2986% ( 128) 00:08:56.245 7612.258 - 7662.671: 83.7731% ( 82) 00:08:56.245 7662.671 - 7713.083: 84.3866% ( 106) 00:08:56.245 7713.083 - 7763.495: 84.8958% ( 88) 00:08:56.245 7763.495 - 7813.908: 85.3414% ( 77) 00:08:56.245 7813.908 - 7864.320: 85.8391% ( 86) 00:08:56.245 7864.320 - 7914.732: 86.2326% ( 68) 00:08:56.245 7914.732 - 7965.145: 86.5509% ( 55) 00:08:56.245 7965.145 - 8015.557: 86.8924% ( 59) 00:08:56.245 8015.557 - 8065.969: 87.3032% ( 71) 00:08:56.245 8065.969 - 8116.382: 87.6331% ( 57) 00:08:56.245 8116.382 - 8166.794: 87.8877% ( 44) 00:08:56.245 8166.794 - 8217.206: 88.1539% ( 46) 00:08:56.245 8217.206 - 8267.618: 88.4549% ( 52) 00:08:56.245 8267.618 - 8318.031: 88.7037% ( 43) 00:08:56.245 8318.031 - 8368.443: 88.8542% ( 26) 00:08:56.245 8368.443 - 8418.855: 89.1146% ( 45) 00:08:56.245 8418.855 - 8469.268: 89.2419% ( 22) 00:08:56.245 8469.268 - 8519.680: 89.4329% ( 33) 00:08:56.245 8519.680 - 8570.092: 89.6238% ( 33) 00:08:56.245 8570.092 - 8620.505: 89.8206% ( 34) 00:08:56.245 8620.505 - 8670.917: 90.0347% ( 37) 00:08:56.245 8670.917 - 8721.329: 90.1736% ( 24) 00:08:56.245 8721.329 - 8771.742: 90.3530% ( 31) 00:08:56.245 8771.742 - 8822.154: 90.4803% ( 22) 00:08:56.245 8822.154 - 8872.566: 90.5903% ( 19) 00:08:56.245 8872.566 - 8922.978: 90.7639% ( 30) 00:08:56.245 8922.978 - 8973.391: 90.8796% ( 20) 00:08:56.245 8973.391 - 9023.803: 90.9722% ( 16) 00:08:56.245 9023.803 - 9074.215: 91.0822% ( 19) 00:08:56.245 9074.215 - 9124.628: 91.1863% ( 18) 00:08:56.245 9124.628 - 9175.040: 91.2558% ( 12) 00:08:56.245 9175.040 - 9225.452: 91.3079% ( 9) 00:08:56.245 9225.452 - 9275.865: 91.4005% ( 16) 00:08:56.245 9275.865 - 9326.277: 91.4815% ( 14) 00:08:56.245 9326.277 - 9376.689: 91.5336% ( 9) 00:08:56.245 9376.689 - 9427.102: 91.5683% ( 6) 00:08:56.245 9427.102 - 9477.514: 91.6146% ( 8) 00:08:56.245 9477.514 - 9527.926: 91.6609% ( 8) 00:08:56.245 9527.926 - 9578.338: 91.7188% ( 10) 00:08:56.245 9578.338 - 9628.751: 91.7593% ( 7) 00:08:56.245 9628.751 - 9679.163: 91.8576% ( 17) 00:08:56.245 9679.163 - 9729.575: 91.9329% ( 13) 00:08:56.245 9729.575 - 9779.988: 92.0023% ( 12) 00:08:56.245 9779.988 - 9830.400: 92.0544% ( 9) 00:08:56.245 9830.400 - 9880.812: 92.0949% ( 7) 00:08:56.245 9880.812 - 9931.225: 92.1528% ( 10) 00:08:56.245 9931.225 - 9981.637: 92.2627% ( 19) 00:08:56.245 9981.637 - 10032.049: 92.3032% ( 7) 00:08:56.245 10032.049 - 10082.462: 92.3322% ( 5) 00:08:56.245 10082.462 - 10132.874: 92.3669% ( 6) 00:08:56.245 10132.874 
- 10183.286: 92.3900% ( 4) 00:08:56.245 10183.286 - 10233.698: 92.4016% ( 2) 00:08:56.245 10233.698 - 10284.111: 92.4074% ( 1) 00:08:56.245 10284.111 - 10334.523: 92.4248% ( 3) 00:08:56.245 10334.523 - 10384.935: 92.4479% ( 4) 00:08:56.245 10384.935 - 10435.348: 92.4769% ( 5) 00:08:56.245 10435.348 - 10485.760: 92.5174% ( 7) 00:08:56.245 10485.760 - 10536.172: 92.5579% ( 7) 00:08:56.245 10536.172 - 10586.585: 92.5984% ( 7) 00:08:56.245 10586.585 - 10636.997: 92.6157% ( 3) 00:08:56.245 10636.997 - 10687.409: 92.6331% ( 3) 00:08:56.245 10687.409 - 10737.822: 92.6794% ( 8) 00:08:56.245 10737.822 - 10788.234: 92.7141% ( 6) 00:08:56.245 10788.234 - 10838.646: 92.7662% ( 9) 00:08:56.245 10838.646 - 10889.058: 92.8588% ( 16) 00:08:56.245 10889.058 - 10939.471: 92.9340% ( 13) 00:08:56.245 10939.471 - 10989.883: 92.9977% ( 11) 00:08:56.245 10989.883 - 11040.295: 93.1019% ( 18) 00:08:56.245 11040.295 - 11090.708: 93.2350% ( 23) 00:08:56.245 11090.708 - 11141.120: 93.3160% ( 14) 00:08:56.245 11141.120 - 11191.532: 93.4028% ( 15) 00:08:56.245 11191.532 - 11241.945: 93.4838% ( 14) 00:08:56.245 11241.945 - 11292.357: 93.5417% ( 10) 00:08:56.245 11292.357 - 11342.769: 93.6285% ( 15) 00:08:56.245 11342.769 - 11393.182: 93.7153% ( 15) 00:08:56.245 11393.182 - 11443.594: 93.7731% ( 10) 00:08:56.245 11443.594 - 11494.006: 93.8773% ( 18) 00:08:56.245 11494.006 - 11544.418: 93.9294% ( 9) 00:08:56.245 11544.418 - 11594.831: 94.0162% ( 15) 00:08:56.245 11594.831 - 11645.243: 94.1146% ( 17) 00:08:56.245 11645.243 - 11695.655: 94.1956% ( 14) 00:08:56.245 11695.655 - 11746.068: 94.2766% ( 14) 00:08:56.245 11746.068 - 11796.480: 94.3634% ( 15) 00:08:56.245 11796.480 - 11846.892: 94.4560% ( 16) 00:08:56.245 11846.892 - 11897.305: 94.5660% ( 19) 00:08:56.245 11897.305 - 11947.717: 94.7396% ( 30) 00:08:56.245 11947.717 - 11998.129: 94.8380% ( 17) 00:08:56.245 11998.129 - 12048.542: 94.9653% ( 22) 00:08:56.245 12048.542 - 12098.954: 95.0984% ( 23) 00:08:56.245 12098.954 - 12149.366: 95.1852% ( 15) 00:08:56.245 12149.366 - 12199.778: 95.2894% ( 18) 00:08:56.245 12199.778 - 12250.191: 95.3993% ( 19) 00:08:56.245 12250.191 - 12300.603: 95.5150% ( 20) 00:08:56.245 12300.603 - 12351.015: 95.6019% ( 15) 00:08:56.245 12351.015 - 12401.428: 95.6829% ( 14) 00:08:56.245 12401.428 - 12451.840: 95.7986% ( 20) 00:08:56.245 12451.840 - 12502.252: 95.8854% ( 15) 00:08:56.245 12502.252 - 12552.665: 95.9838% ( 17) 00:08:56.245 12552.665 - 12603.077: 96.0417% ( 10) 00:08:56.245 12603.077 - 12653.489: 96.1285% ( 15) 00:08:56.245 12653.489 - 12703.902: 96.2211% ( 16) 00:08:56.245 12703.902 - 12754.314: 96.2847% ( 11) 00:08:56.245 12754.314 - 12804.726: 96.3889% ( 18) 00:08:56.245 12804.726 - 12855.138: 96.5046% ( 20) 00:08:56.245 12855.138 - 12905.551: 96.5625% ( 10) 00:08:56.245 12905.551 - 13006.375: 96.6956% ( 23) 00:08:56.245 13006.375 - 13107.200: 96.8345% ( 24) 00:08:56.245 13107.200 - 13208.025: 96.9965% ( 28) 00:08:56.245 13208.025 - 13308.849: 97.1181% ( 21) 00:08:56.245 13308.849 - 13409.674: 97.2627% ( 25) 00:08:56.245 13409.674 - 13510.498: 97.4711% ( 36) 00:08:56.245 13510.498 - 13611.323: 97.6736% ( 35) 00:08:56.245 13611.323 - 13712.148: 97.8183% ( 25) 00:08:56.245 13712.148 - 13812.972: 97.9109% ( 16) 00:08:56.245 13812.972 - 13913.797: 97.9803% ( 12) 00:08:56.245 13913.797 - 14014.622: 98.0382% ( 10) 00:08:56.245 14014.622 - 14115.446: 98.0787% ( 7) 00:08:56.245 14115.446 - 14216.271: 98.1134% ( 6) 00:08:56.245 14216.271 - 14317.095: 98.1597% ( 8) 00:08:56.245 14317.095 - 14417.920: 98.1887% ( 5) 00:08:56.245 14417.920 - 
14518.745: 98.2292% ( 7) 00:08:56.245 14518.745 - 14619.569: 98.2639% ( 6) 00:08:56.245 14619.569 - 14720.394: 98.3160% ( 9) 00:08:56.245 14720.394 - 14821.218: 98.5127% ( 34) 00:08:56.246 14821.218 - 14922.043: 98.6400% ( 22) 00:08:56.246 14922.043 - 15022.868: 98.7616% ( 21) 00:08:56.246 15022.868 - 15123.692: 98.8947% ( 23) 00:08:56.246 15123.692 - 15224.517: 98.9641% ( 12) 00:08:56.246 15224.517 - 15325.342: 99.0567% ( 16) 00:08:56.246 15325.342 - 15426.166: 99.1377% ( 14) 00:08:56.246 15426.166 - 15526.991: 99.1609% ( 4) 00:08:56.246 15526.991 - 15627.815: 99.1725% ( 2) 00:08:56.246 15627.815 - 15728.640: 99.1898% ( 3) 00:08:56.246 15728.640 - 15829.465: 99.2130% ( 4) 00:08:56.246 15829.465 - 15930.289: 99.2245% ( 2) 00:08:56.246 15930.289 - 16031.114: 99.2477% ( 4) 00:08:56.246 16031.114 - 16131.938: 99.2593% ( 2) 00:08:56.246 26214.400 - 26416.049: 99.2824% ( 4) 00:08:56.246 26416.049 - 26617.698: 99.3345% ( 9) 00:08:56.246 26617.698 - 26819.348: 99.4444% ( 19) 00:08:56.246 26819.348 - 27020.997: 99.5081% ( 11) 00:08:56.246 27020.997 - 27222.646: 99.5312% ( 4) 00:08:56.246 27222.646 - 27424.295: 99.5775% ( 8) 00:08:56.246 27424.295 - 27625.945: 99.5949% ( 3) 00:08:56.246 27625.945 - 27827.594: 99.6238% ( 5) 00:08:56.246 27827.594 - 28029.243: 99.6296% ( 1) 00:08:56.246 31053.982 - 31255.631: 99.6586% ( 5) 00:08:56.246 31255.631 - 31457.280: 99.6991% ( 7) 00:08:56.246 31457.280 - 31658.929: 99.7454% ( 8) 00:08:56.246 31658.929 - 31860.578: 99.7801% ( 6) 00:08:56.246 31860.578 - 32062.228: 99.8206% ( 7) 00:08:56.246 32062.228 - 32263.877: 99.8553% ( 6) 00:08:56.246 32263.877 - 32465.526: 99.8958% ( 7) 00:08:56.246 32465.526 - 32667.175: 99.9363% ( 7) 00:08:56.246 32667.175 - 32868.825: 99.9826% ( 8) 00:08:56.246 32868.825 - 33070.474: 100.0000% ( 3) 00:08:56.246 00:08:56.246 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:56.246 ============================================================================== 00:08:56.246 Range in us Cumulative IO count 00:08:56.246 5721.797 - 5747.003: 0.0116% ( 2) 00:08:56.246 5772.209 - 5797.415: 0.0174% ( 1) 00:08:56.246 5822.622 - 5847.828: 0.0231% ( 1) 00:08:56.246 5847.828 - 5873.034: 0.0289% ( 1) 00:08:56.246 5923.446 - 5948.652: 0.0521% ( 4) 00:08:56.246 5948.652 - 5973.858: 0.1100% ( 10) 00:08:56.246 5973.858 - 5999.065: 0.2025% ( 16) 00:08:56.246 5999.065 - 6024.271: 0.3241% ( 21) 00:08:56.246 6024.271 - 6049.477: 0.4803% ( 27) 00:08:56.246 6049.477 - 6074.683: 1.2269% ( 129) 00:08:56.246 6074.683 - 6099.889: 1.7014% ( 82) 00:08:56.246 6099.889 - 6125.095: 2.3958% ( 120) 00:08:56.246 6125.095 - 6150.302: 2.9340% ( 93) 00:08:56.246 6150.302 - 6175.508: 3.9178% ( 170) 00:08:56.246 6175.508 - 6200.714: 5.3472% ( 247) 00:08:56.246 6200.714 - 6225.920: 7.4884% ( 370) 00:08:56.246 6225.920 - 6251.126: 9.9595% ( 427) 00:08:56.246 6251.126 - 6276.332: 12.4942% ( 438) 00:08:56.246 6276.332 - 6301.538: 14.0509% ( 269) 00:08:56.246 6301.538 - 6326.745: 15.7581% ( 295) 00:08:56.246 6326.745 - 6351.951: 17.6215% ( 322) 00:08:56.246 6351.951 - 6377.157: 18.9931% ( 237) 00:08:56.246 6377.157 - 6402.363: 20.6829% ( 292) 00:08:56.246 6402.363 - 6427.569: 22.5579% ( 324) 00:08:56.246 6427.569 - 6452.775: 24.7627% ( 381) 00:08:56.246 6452.775 - 6503.188: 28.1771% ( 590) 00:08:56.246 6503.188 - 6553.600: 32.7488% ( 790) 00:08:56.246 6553.600 - 6604.012: 37.0891% ( 750) 00:08:56.246 6604.012 - 6654.425: 41.1227% ( 697) 00:08:56.246 6654.425 - 6704.837: 45.0231% ( 674) 00:08:56.246 6704.837 - 6755.249: 48.5938% ( 617) 00:08:56.246 6755.249 - 
6805.662: 53.1771% ( 792) 00:08:56.246 6805.662 - 6856.074: 56.8056% ( 627) 00:08:56.246 6856.074 - 6906.486: 59.7164% ( 503) 00:08:56.246 6906.486 - 6956.898: 63.1366% ( 591) 00:08:56.246 6956.898 - 7007.311: 65.5035% ( 409) 00:08:56.246 7007.311 - 7057.723: 67.8299% ( 402) 00:08:56.246 7057.723 - 7108.135: 70.6192% ( 482) 00:08:56.246 7108.135 - 7158.548: 72.6910% ( 358) 00:08:56.246 7158.548 - 7208.960: 74.4039% ( 296) 00:08:56.246 7208.960 - 7259.372: 76.0764% ( 289) 00:08:56.246 7259.372 - 7309.785: 77.4306% ( 234) 00:08:56.246 7309.785 - 7360.197: 78.8194% ( 240) 00:08:56.246 7360.197 - 7410.609: 80.1620% ( 232) 00:08:56.246 7410.609 - 7461.022: 81.1458% ( 170) 00:08:56.246 7461.022 - 7511.434: 81.9792% ( 144) 00:08:56.246 7511.434 - 7561.846: 82.7951% ( 141) 00:08:56.246 7561.846 - 7612.258: 83.5475% ( 130) 00:08:56.246 7612.258 - 7662.671: 84.0683% ( 90) 00:08:56.246 7662.671 - 7713.083: 84.5428% ( 82) 00:08:56.246 7713.083 - 7763.495: 85.0463% ( 87) 00:08:56.246 7763.495 - 7813.908: 85.2951% ( 43) 00:08:56.246 7813.908 - 7864.320: 85.5266% ( 40) 00:08:56.246 7864.320 - 7914.732: 85.7755% ( 43) 00:08:56.246 7914.732 - 7965.145: 86.0012% ( 39) 00:08:56.246 7965.145 - 8015.557: 86.2789% ( 48) 00:08:56.246 8015.557 - 8065.969: 86.8345% ( 96) 00:08:56.246 8065.969 - 8116.382: 87.0949% ( 45) 00:08:56.246 8116.382 - 8166.794: 87.3843% ( 50) 00:08:56.246 8166.794 - 8217.206: 87.6562% ( 47) 00:08:56.246 8217.206 - 8267.618: 87.9282% ( 47) 00:08:56.246 8267.618 - 8318.031: 88.1192% ( 33) 00:08:56.246 8318.031 - 8368.443: 88.3854% ( 46) 00:08:56.246 8368.443 - 8418.855: 88.6227% ( 41) 00:08:56.246 8418.855 - 8469.268: 88.7905% ( 29) 00:08:56.246 8469.268 - 8519.680: 88.9062% ( 20) 00:08:56.246 8519.680 - 8570.092: 89.0799% ( 30) 00:08:56.246 8570.092 - 8620.505: 89.2708% ( 33) 00:08:56.246 8620.505 - 8670.917: 89.4676% ( 34) 00:08:56.246 8670.917 - 8721.329: 89.6933% ( 39) 00:08:56.246 8721.329 - 8771.742: 89.9016% ( 36) 00:08:56.246 8771.742 - 8822.154: 90.2431% ( 59) 00:08:56.246 8822.154 - 8872.566: 90.4514% ( 36) 00:08:56.246 8872.566 - 8922.978: 90.6134% ( 28) 00:08:56.246 8922.978 - 8973.391: 90.7639% ( 26) 00:08:56.246 8973.391 - 9023.803: 90.8854% ( 21) 00:08:56.246 9023.803 - 9074.215: 91.0069% ( 21) 00:08:56.246 9074.215 - 9124.628: 91.1458% ( 24) 00:08:56.246 9124.628 - 9175.040: 91.2153% ( 12) 00:08:56.246 9175.040 - 9225.452: 91.2674% ( 9) 00:08:56.246 9225.452 - 9275.865: 91.3252% ( 10) 00:08:56.246 9275.865 - 9326.277: 91.3831% ( 10) 00:08:56.246 9326.277 - 9376.689: 91.4410% ( 10) 00:08:56.246 9376.689 - 9427.102: 91.5162% ( 13) 00:08:56.246 9427.102 - 9477.514: 91.5741% ( 10) 00:08:56.246 9477.514 - 9527.926: 91.6262% ( 9) 00:08:56.246 9527.926 - 9578.338: 91.6493% ( 4) 00:08:56.246 9578.338 - 9628.751: 91.6840% ( 6) 00:08:56.246 9628.751 - 9679.163: 91.7072% ( 4) 00:08:56.246 9679.163 - 9729.575: 91.7535% ( 8) 00:08:56.246 9729.575 - 9779.988: 91.8171% ( 11) 00:08:56.246 9779.988 - 9830.400: 91.8750% ( 10) 00:08:56.246 9830.400 - 9880.812: 91.9618% ( 15) 00:08:56.246 9880.812 - 9931.225: 92.0602% ( 17) 00:08:56.246 9931.225 - 9981.637: 92.1296% ( 12) 00:08:56.246 9981.637 - 10032.049: 92.2164% ( 15) 00:08:56.246 10032.049 - 10082.462: 92.4711% ( 44) 00:08:56.246 10082.462 - 10132.874: 92.5810% ( 19) 00:08:56.246 10132.874 - 10183.286: 92.7257% ( 25) 00:08:56.246 10183.286 - 10233.698: 92.8067% ( 14) 00:08:56.246 10233.698 - 10284.111: 92.8588% ( 9) 00:08:56.246 10284.111 - 10334.523: 92.9167% ( 10) 00:08:56.246 10334.523 - 10384.935: 92.9803% ( 11) 00:08:56.246 10384.935 - 
10435.348: 93.0440% ( 11) 00:08:56.246 10435.348 - 10485.760: 93.1308% ( 15) 00:08:56.246 10485.760 - 10536.172: 93.2118% ( 14) 00:08:56.246 10536.172 - 10586.585: 93.2986% ( 15) 00:08:56.246 10586.585 - 10636.997: 93.3623% ( 11) 00:08:56.246 10636.997 - 10687.409: 93.4317% ( 12) 00:08:56.246 10687.409 - 10737.822: 93.5475% ( 20) 00:08:56.246 10737.822 - 10788.234: 93.5995% ( 9) 00:08:56.246 10788.234 - 10838.646: 93.6227% ( 4) 00:08:56.246 10838.646 - 10889.058: 93.6690% ( 8) 00:08:56.246 10889.058 - 10939.471: 93.7037% ( 6) 00:08:56.246 10939.471 - 10989.883: 93.7326% ( 5) 00:08:56.246 10989.883 - 11040.295: 93.7674% ( 6) 00:08:56.246 11040.295 - 11090.708: 93.8079% ( 7) 00:08:56.246 11090.708 - 11141.120: 93.8600% ( 9) 00:08:56.246 11141.120 - 11191.532: 93.9062% ( 8) 00:08:56.246 11191.532 - 11241.945: 93.9757% ( 12) 00:08:56.246 11241.945 - 11292.357: 94.0625% ( 15) 00:08:56.246 11292.357 - 11342.769: 94.1204% ( 10) 00:08:56.246 11342.769 - 11393.182: 94.1493% ( 5) 00:08:56.246 11393.182 - 11443.594: 94.1725% ( 4) 00:08:56.246 11443.594 - 11494.006: 94.2014% ( 5) 00:08:56.246 11494.006 - 11544.418: 94.2477% ( 8) 00:08:56.246 11544.418 - 11594.831: 94.2940% ( 8) 00:08:56.246 11594.831 - 11645.243: 94.3345% ( 7) 00:08:56.246 11645.243 - 11695.655: 94.3808% ( 8) 00:08:56.246 11695.655 - 11746.068: 94.4213% ( 7) 00:08:56.246 11746.068 - 11796.480: 94.4618% ( 7) 00:08:56.246 11796.480 - 11846.892: 94.5255% ( 11) 00:08:56.246 11846.892 - 11897.305: 94.6123% ( 15) 00:08:56.246 11897.305 - 11947.717: 94.6586% ( 8) 00:08:56.246 11947.717 - 11998.129: 94.7164% ( 10) 00:08:56.246 11998.129 - 12048.542: 94.7627% ( 8) 00:08:56.246 12048.542 - 12098.954: 94.7975% ( 6) 00:08:56.246 12098.954 - 12149.366: 94.8380% ( 7) 00:08:56.246 12149.366 - 12199.778: 94.8843% ( 8) 00:08:56.246 12199.778 - 12250.191: 94.9306% ( 8) 00:08:56.246 12250.191 - 12300.603: 94.9826% ( 9) 00:08:56.246 12300.603 - 12351.015: 95.0810% ( 17) 00:08:56.246 12351.015 - 12401.428: 95.1736% ( 16) 00:08:56.246 12401.428 - 12451.840: 95.2373% ( 11) 00:08:56.246 12451.840 - 12502.252: 95.3125% ( 13) 00:08:56.246 12502.252 - 12552.665: 95.3993% ( 15) 00:08:56.246 12552.665 - 12603.077: 95.5035% ( 18) 00:08:56.246 12603.077 - 12653.489: 95.6134% ( 19) 00:08:56.246 12653.489 - 12703.902: 95.7639% ( 26) 00:08:56.246 12703.902 - 12754.314: 96.0706% ( 53) 00:08:56.246 12754.314 - 12804.726: 96.2095% ( 24) 00:08:56.246 12804.726 - 12855.138: 96.2847% ( 13) 00:08:56.246 12855.138 - 12905.551: 96.3484% ( 11) 00:08:56.246 12905.551 - 13006.375: 96.4988% ( 26) 00:08:56.246 13006.375 - 13107.200: 96.6898% ( 33) 00:08:56.246 13107.200 - 13208.025: 96.8519% ( 28) 00:08:56.246 13208.025 - 13308.849: 97.2685% ( 72) 00:08:56.246 13308.849 - 13409.674: 97.4074% ( 24) 00:08:56.246 13409.674 - 13510.498: 97.4884% ( 14) 00:08:56.247 13510.498 - 13611.323: 97.5521% ( 11) 00:08:56.247 13611.323 - 13712.148: 97.5868% ( 6) 00:08:56.247 13712.148 - 13812.972: 97.7025% ( 20) 00:08:56.247 13812.972 - 13913.797: 97.8819% ( 31) 00:08:56.247 13913.797 - 14014.622: 97.9398% ( 10) 00:08:56.247 14014.622 - 14115.446: 98.0208% ( 14) 00:08:56.247 14115.446 - 14216.271: 98.2060% ( 32) 00:08:56.247 14216.271 - 14317.095: 98.3218% ( 20) 00:08:56.247 14317.095 - 14417.920: 98.4606% ( 24) 00:08:56.247 14417.920 - 14518.745: 98.6285% ( 29) 00:08:56.247 14518.745 - 14619.569: 98.6863% ( 10) 00:08:56.247 14619.569 - 14720.394: 98.7442% ( 10) 00:08:56.247 14720.394 - 14821.218: 98.8021% ( 10) 00:08:56.247 14821.218 - 14922.043: 98.8252% ( 4) 00:08:56.247 14922.043 - 15022.868: 
98.8542% ( 5) 00:08:56.247 15022.868 - 15123.692: 98.8715% ( 3) 00:08:56.247 15123.692 - 15224.517: 98.8889% ( 3) 00:08:56.247 15224.517 - 15325.342: 98.9120% ( 4) 00:08:56.247 15325.342 - 15426.166: 98.9236% ( 2) 00:08:56.247 15426.166 - 15526.991: 98.9525% ( 5) 00:08:56.247 15526.991 - 15627.815: 99.0278% ( 13) 00:08:56.247 15627.815 - 15728.640: 99.0625% ( 6) 00:08:56.247 15728.640 - 15829.465: 99.0914% ( 5) 00:08:56.247 15829.465 - 15930.289: 99.1204% ( 5) 00:08:56.247 15930.289 - 16031.114: 99.1609% ( 7) 00:08:56.247 16031.114 - 16131.938: 99.2014% ( 7) 00:08:56.247 16131.938 - 16232.763: 99.2419% ( 7) 00:08:56.247 16232.763 - 16333.588: 99.2593% ( 3) 00:08:56.247 24097.083 - 24197.908: 99.2766% ( 3) 00:08:56.247 24197.908 - 24298.732: 99.2940% ( 3) 00:08:56.247 24298.732 - 24399.557: 99.3171% ( 4) 00:08:56.247 24399.557 - 24500.382: 99.3403% ( 4) 00:08:56.247 24500.382 - 24601.206: 99.3634% ( 4) 00:08:56.247 24601.206 - 24702.031: 99.3808% ( 3) 00:08:56.247 24702.031 - 24802.855: 99.4039% ( 4) 00:08:56.247 24802.855 - 24903.680: 99.4271% ( 4) 00:08:56.247 24903.680 - 25004.505: 99.4444% ( 3) 00:08:56.247 25004.505 - 25105.329: 99.4676% ( 4) 00:08:56.247 25105.329 - 25206.154: 99.4907% ( 4) 00:08:56.247 25206.154 - 25306.978: 99.5081% ( 3) 00:08:56.247 25306.978 - 25407.803: 99.5312% ( 4) 00:08:56.247 25407.803 - 25508.628: 99.5544% ( 4) 00:08:56.247 25508.628 - 25609.452: 99.5718% ( 3) 00:08:56.247 25609.452 - 25710.277: 99.5949% ( 4) 00:08:56.247 25710.277 - 25811.102: 99.6181% ( 4) 00:08:56.247 25811.102 - 26012.751: 99.6296% ( 2) 00:08:56.247 29239.138 - 29440.788: 99.6701% ( 7) 00:08:56.247 29440.788 - 29642.437: 99.7106% ( 7) 00:08:56.247 29642.437 - 29844.086: 99.7512% ( 7) 00:08:56.247 29844.086 - 30045.735: 99.7975% ( 8) 00:08:56.247 30045.735 - 30247.385: 99.8380% ( 7) 00:08:56.247 30247.385 - 30449.034: 99.8785% ( 7) 00:08:56.247 30449.034 - 30650.683: 99.9190% ( 7) 00:08:56.247 30650.683 - 30852.332: 99.9653% ( 8) 00:08:56.247 30852.332 - 31053.982: 100.0000% ( 6) 00:08:56.247 00:08:56.247 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:56.247 ============================================================================== 00:08:56.247 Range in us Cumulative IO count 00:08:56.247 5747.003 - 5772.209: 0.0058% ( 1) 00:08:56.247 5797.415 - 5822.622: 0.0405% ( 6) 00:08:56.247 5822.622 - 5847.828: 0.0579% ( 3) 00:08:56.247 5847.828 - 5873.034: 0.0926% ( 6) 00:08:56.247 5873.034 - 5898.240: 0.1215% ( 5) 00:08:56.247 5898.240 - 5923.446: 0.2315% ( 19) 00:08:56.247 5923.446 - 5948.652: 0.3762% ( 25) 00:08:56.247 5948.652 - 5973.858: 0.5150% ( 24) 00:08:56.247 5973.858 - 5999.065: 0.7292% ( 37) 00:08:56.247 5999.065 - 6024.271: 0.9838% ( 44) 00:08:56.247 6024.271 - 6049.477: 1.5799% ( 103) 00:08:56.247 6049.477 - 6074.683: 2.2106% ( 109) 00:08:56.247 6074.683 - 6099.889: 2.7604% ( 95) 00:08:56.247 6099.889 - 6125.095: 3.5359% ( 134) 00:08:56.247 6125.095 - 6150.302: 4.3113% ( 134) 00:08:56.247 6150.302 - 6175.508: 5.5150% ( 208) 00:08:56.247 6175.508 - 6200.714: 6.8981% ( 239) 00:08:56.247 6200.714 - 6225.920: 8.4317% ( 265) 00:08:56.247 6225.920 - 6251.126: 10.4282% ( 345) 00:08:56.247 6251.126 - 6276.332: 12.6273% ( 380) 00:08:56.247 6276.332 - 6301.538: 14.3924% ( 305) 00:08:56.247 6301.538 - 6326.745: 16.0359% ( 284) 00:08:56.247 6326.745 - 6351.951: 17.6968% ( 287) 00:08:56.247 6351.951 - 6377.157: 19.5949% ( 328) 00:08:56.247 6377.157 - 6402.363: 21.7188% ( 367) 00:08:56.247 6402.363 - 6427.569: 23.3449% ( 281) 00:08:56.247 6427.569 - 6452.775: 25.2199% ( 
324) 00:08:56.247 6452.775 - 6503.188: 28.9583% ( 646) 00:08:56.247 6503.188 - 6553.600: 33.0671% ( 710) 00:08:56.247 6553.600 - 6604.012: 37.3669% ( 743) 00:08:56.247 6604.012 - 6654.425: 40.9780% ( 624) 00:08:56.247 6654.425 - 6704.837: 45.5845% ( 796) 00:08:56.247 6704.837 - 6755.249: 48.9583% ( 583) 00:08:56.247 6755.249 - 6805.662: 52.8588% ( 674) 00:08:56.247 6805.662 - 6856.074: 56.5741% ( 642) 00:08:56.247 6856.074 - 6906.486: 59.7627% ( 551) 00:08:56.247 6906.486 - 6956.898: 62.7025% ( 508) 00:08:56.247 6956.898 - 7007.311: 66.0590% ( 580) 00:08:56.247 7007.311 - 7057.723: 68.2407% ( 377) 00:08:56.247 7057.723 - 7108.135: 70.1273% ( 326) 00:08:56.247 7108.135 - 7158.548: 72.7431% ( 452) 00:08:56.247 7158.548 - 7208.960: 74.6470% ( 329) 00:08:56.247 7208.960 - 7259.372: 76.1863% ( 266) 00:08:56.247 7259.372 - 7309.785: 77.2049% ( 176) 00:08:56.247 7309.785 - 7360.197: 78.5880% ( 239) 00:08:56.247 7360.197 - 7410.609: 79.8900% ( 225) 00:08:56.247 7410.609 - 7461.022: 81.0243% ( 196) 00:08:56.247 7461.022 - 7511.434: 81.8229% ( 138) 00:08:56.247 7511.434 - 7561.846: 82.7894% ( 167) 00:08:56.247 7561.846 - 7612.258: 83.4144% ( 108) 00:08:56.247 7612.258 - 7662.671: 83.9120% ( 86) 00:08:56.247 7662.671 - 7713.083: 84.4850% ( 99) 00:08:56.247 7713.083 - 7763.495: 84.9653% ( 83) 00:08:56.247 7763.495 - 7813.908: 85.2199% ( 44) 00:08:56.247 7813.908 - 7864.320: 85.4167% ( 34) 00:08:56.247 7864.320 - 7914.732: 85.7060% ( 50) 00:08:56.247 7914.732 - 7965.145: 85.8738% ( 29) 00:08:56.247 7965.145 - 8015.557: 86.0475% ( 30) 00:08:56.247 8015.557 - 8065.969: 86.3715% ( 56) 00:08:56.247 8065.969 - 8116.382: 86.5625% ( 33) 00:08:56.247 8116.382 - 8166.794: 86.7882% ( 39) 00:08:56.247 8166.794 - 8217.206: 87.1181% ( 57) 00:08:56.247 8217.206 - 8267.618: 87.2627% ( 25) 00:08:56.247 8267.618 - 8318.031: 87.4595% ( 34) 00:08:56.247 8318.031 - 8368.443: 87.6852% ( 39) 00:08:56.247 8368.443 - 8418.855: 87.9572% ( 47) 00:08:56.247 8418.855 - 8469.268: 88.1424% ( 32) 00:08:56.247 8469.268 - 8519.680: 88.4086% ( 46) 00:08:56.247 8519.680 - 8570.092: 88.5243% ( 20) 00:08:56.247 8570.092 - 8620.505: 88.7095% ( 32) 00:08:56.247 8620.505 - 8670.917: 88.9005% ( 33) 00:08:56.247 8670.917 - 8721.329: 89.0625% ( 28) 00:08:56.247 8721.329 - 8771.742: 89.2188% ( 27) 00:08:56.247 8771.742 - 8822.154: 89.3634% ( 25) 00:08:56.247 8822.154 - 8872.566: 89.5139% ( 26) 00:08:56.247 8872.566 - 8922.978: 89.6586% ( 25) 00:08:56.247 8922.978 - 8973.391: 89.8032% ( 25) 00:08:56.247 8973.391 - 9023.803: 89.9421% ( 24) 00:08:56.247 9023.803 - 9074.215: 90.3819% ( 76) 00:08:56.247 9074.215 - 9124.628: 90.5845% ( 35) 00:08:56.247 9124.628 - 9175.040: 90.7697% ( 32) 00:08:56.247 9175.040 - 9225.452: 90.9259% ( 27) 00:08:56.247 9225.452 - 9275.865: 91.1053% ( 31) 00:08:56.247 9275.865 - 9326.277: 91.3252% ( 38) 00:08:56.247 9326.277 - 9376.689: 91.4815% ( 27) 00:08:56.247 9376.689 - 9427.102: 91.5856% ( 18) 00:08:56.247 9427.102 - 9477.514: 91.6493% ( 11) 00:08:56.247 9477.514 - 9527.926: 91.7361% ( 15) 00:08:56.247 9527.926 - 9578.338: 91.8056% ( 12) 00:08:56.247 9578.338 - 9628.751: 91.8634% ( 10) 00:08:56.247 9628.751 - 9679.163: 91.9329% ( 12) 00:08:56.247 9679.163 - 9729.575: 92.0081% ( 13) 00:08:56.247 9729.575 - 9779.988: 92.1007% ( 16) 00:08:56.247 9779.988 - 9830.400: 92.1817% ( 14) 00:08:56.247 9830.400 - 9880.812: 92.3090% ( 22) 00:08:56.247 9880.812 - 9931.225: 92.4248% ( 20) 00:08:56.247 9931.225 - 9981.637: 92.5579% ( 23) 00:08:56.247 9981.637 - 10032.049: 92.6273% ( 12) 00:08:56.247 10032.049 - 10082.462: 92.6968% ( 
12)
00:08:56.247 [per-bucket cumulative latency entries from 10082.462 us through 28835.840 us omitted]
00:08:56.248 28835.840 - 29037.489:   99.9711% (         8)
00:08:56.248 29037.489 - 29239.138:  100.0000% (         5)
00:08:56.248
00:08:56.248 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:56.248 ==============================================================================
00:08:56.248        Range in us     Cumulative    IO count
00:08:56.248  5646.178 -  5671.385:    0.0058% (         1)
00:08:56.248 [per-bucket cumulative latency entries omitted]
00:08:56.249 27020.997 - 27222.646:  100.0000% (         7)
00:08:56.249
00:08:56.249 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:56.249 ==============================================================================
00:08:56.249        Range in us     Cumulative    IO count
00:08:56.249  5620.972 -  5646.178:    0.0058% (         1)
00:08:56.249 [per-bucket cumulative latency entries omitted]
00:08:56.250 25206.154 - 25306.978:  100.0000% (         1)
00:08:56.250
00:08:56.250 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:56.250 ==============================================================================
00:08:56.250        Range in us     Cumulative    IO count
00:08:56.250  5620.972 -  5646.178:    0.0058% (         1)
00:08:56.250 [per-bucket cumulative latency entries omitted]
00:08:56.251 19660.800 - 19761.625:  100.0000% (         3)
00:08:56.251
00:08:56.509 14:41:34 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:56.509
00:08:56.509 real 0m2.506s
00:08:56.509 user 0m2.213s
00:08:56.509 sys 0m0.197s
00:08:56.509 14:41:34 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.509 14:41:34 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:56.509 ************************************
00:08:56.509 END TEST nvme_perf
00:08:56.509 ************************************
00:08:56.509 14:41:34 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:56.510 14:41:34 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:56.510 14:41:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.510 14:41:34 nvme -- common/autotest_common.sh@10 -- # set +x
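The histograms nvme_perf prints above are cumulative: each bucket's percentage is the fraction of all I/Os that completed at or below that bucket's upper bound, so percentiles can be read straight off the table. A minimal sketch of that reduction (the bucket struct and helper below are illustrative, not part of the SPDK tooling):

```c
#include <stdio.h>

/* One row of an nvme_perf-style cumulative histogram: "lo - hi: cum% ( count )". */
struct bucket {
    double lo_us, hi_us;   /* bucket range in microseconds */
    double cum_pct;        /* cumulative percentage of I/Os */
};

/* Return the upper bound (us) of the first bucket whose cumulative
 * percentage reaches the requested percentile, e.g. 99.0 for p99. */
static double percentile_us(const struct bucket *b, int n, double pct)
{
    for (int i = 0; i < n; i++) {
        if (b[i].cum_pct >= pct)
            return b[i].hi_us;
    }
    return b[n - 1].hi_us;
}

int main(void)
{
    /* Two rows transcribed from the NSID 1 histogram above. */
    struct bucket sample[] = {
        {  5646.178,  5671.385,   0.0058 },
        { 27020.997, 27222.646, 100.0000 },
    };
    printf("p99 <= %.3f us\n", percentile_us(sample, 2, 99.0));
    return 0;
}
```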
00:08:56.510 ************************************
00:08:56.510 START TEST nvme_hello_world
00:08:56.510 ************************************
00:08:56.510 14:41:34 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:56.510 Initializing NVMe Controllers
00:08:56.510 Attached to 0000:00:10.0
00:08:56.510 Namespace ID: 1 size: 6GB
00:08:56.510 Attached to 0000:00:11.0
00:08:56.510 Namespace ID: 1 size: 5GB
00:08:56.510 Attached to 0000:00:13.0
00:08:56.510 Namespace ID: 1 size: 1GB
00:08:56.510 Attached to 0000:00:12.0
00:08:56.510 Namespace ID: 1 size: 4GB
00:08:56.510 Namespace ID: 2 size: 4GB
00:08:56.510 Namespace ID: 3 size: 4GB
00:08:56.510 Initialization complete.
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.510 INFO: using host memory buffer for IO
00:08:56.510 Hello world!
00:08:56.768 ************************************
00:08:56.768 END TEST nvme_hello_world
00:08:56.768 ************************************
00:08:56.768
00:08:56.768 real 0m0.219s
00:08:56.768 user 0m0.085s
00:08:56.768 sys 0m0.096s
00:08:56.768 14:41:34 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.768 14:41:34 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:56.768 14:41:34 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:56.768 14:41:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.768 14:41:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.768 14:41:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:56.768 ************************************
00:08:56.768 START TEST nvme_sgl
00:08:56.768 ************************************
00:08:56.768 14:41:34 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
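The hello_world run above attaches to every probed controller, then issues one write and one read per namespace through an I/O qpair; "using host memory buffer for IO" refers to the data buffer living in host DMA-able memory. A condensed sketch of that flow using the public SPDK NVMe API, with error handling trimmed and the probe/attach callbacks assumed to have stashed `g_ctrlr` and `g_ns` (the nvme_sgl output follows right after):

```c
#include <stdio.h>
#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/env.h"

/* Assumed to be filled in by the attach callback of spdk_nvme_probe(). */
extern struct spdk_nvme_ctrlr *g_ctrlr;
extern struct spdk_nvme_ns *g_ns;

static bool g_done;

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_done = true;
}

static void hello_world_io(void)
{
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
    /* 4 KiB DMA-able host memory buffer. */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    snprintf(buf, 0x1000, "%s", "Hello world!\n");
    g_done = false;
    spdk_nvme_ns_cmd_write(g_ns, qpair, buf, 0 /* LBA */,
                           1 /* LBA count */, io_complete, NULL, 0);
    while (!g_done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}
```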
00:08:56.768 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:56.768 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:56.768 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:56.768 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:56.768 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:56.768 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:56.768 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:56.768 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:57.027 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:57.027 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:57.027 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:57.027 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:57.027 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:57.027 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:57.027 NVMe Readv/Writev Request test
00:08:57.027 Attached to 0000:00:10.0
00:08:57.027 Attached to 0000:00:11.0
00:08:57.027 Attached to 0000:00:13.0
00:08:57.027 Attached to 0000:00:12.0
00:08:57.027 0000:00:10.0: build_io_request_2 test passed
00:08:57.027 0000:00:10.0: build_io_request_4 test passed
00:08:57.027 0000:00:10.0: build_io_request_5 test passed
00:08:57.027 0000:00:10.0: build_io_request_6 test passed
00:08:57.027 0000:00:10.0: build_io_request_7 test passed
00:08:57.027 0000:00:10.0: build_io_request_10 test passed
00:08:57.027 0000:00:11.0: build_io_request_2 test passed
00:08:57.027 0000:00:11.0: build_io_request_4 test passed
00:08:57.027 0000:00:11.0: build_io_request_5 test passed
00:08:57.027 0000:00:11.0: build_io_request_6 test passed
00:08:57.027 0000:00:11.0: build_io_request_7 test passed
00:08:57.027 0000:00:11.0: build_io_request_10 test passed
00:08:57.027 Cleaning up...
00:08:57.027
00:08:57.027 real 0m0.288s
00:08:57.027 user 0m0.151s
00:08:57.027 sys 0m0.091s
00:08:57.027 14:41:34 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.027 14:41:34 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:57.027 ************************************
00:08:57.027 END TEST nvme_sgl
00:08:57.027 ************************************
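The sgl test above drives the "Invalid IO length parameter" paths by building scatter-gather requests whose total SGE length disagrees with the LBA count; the well-formed variants come back as "test passed". A hedged sketch of how a scattered write is issued through SPDK's callback-driven SGE iteration (the two-element SGE table and the 8-LBA request are illustrative, not the test's exact cases):

```c
#include "spdk/nvme.h"

struct sgl_ctx {
    struct { void *base; uint32_t len; } sge[2];
    int idx;
};

/* Called by the driver to restart SGE iteration at a byte offset. */
static void reset_sgl(void *cb_arg, uint32_t offset)
{
    struct sgl_ctx *ctx = cb_arg;

    ctx->idx = 0;
    /* offset handling omitted; the real test accounts for it */
}

/* Called repeatedly to fetch the next SGE. */
static int next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    *address = ctx->sge[ctx->idx].base;
    *length = ctx->sge[ctx->idx].len;
    ctx->idx++;
    return 0;
}

static int scattered_write(struct spdk_nvme_ns *ns,
                           struct spdk_nvme_qpair *qpair,
                           struct sgl_ctx *ctx,
                           spdk_nvme_cmd_cb cb)
{
    /* If sge[0].len + sge[1].len does not equal 8 sectors' worth of
     * bytes, the request is rejected and the test prints
     * "Invalid IO length parameter". */
    return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* LBA */, 8 /* LBAs */,
                                   cb, ctx, 0, reset_sgl, next_sge);
}
```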
00:08:57.028 14:41:34 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:57.028 14:41:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.028 14:41:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.028 14:41:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.028 ************************************
00:08:57.028 START TEST nvme_e2edp
00:08:57.028 ************************************
00:08:57.028 14:41:34 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:57.285 NVMe Write/Read with End-to-End data protection test
00:08:57.285 Attached to 0000:00:10.0
00:08:57.285 Attached to 0000:00:11.0
00:08:57.285 Attached to 0000:00:13.0
00:08:57.285 Attached to 0000:00:12.0
00:08:57.285 Cleaning up...
00:08:57.285 ************************************
00:08:57.285 END TEST nvme_e2edp
00:08:57.285 ************************************
00:08:57.285
00:08:57.285 real 0m0.223s
00:08:57.285 user 0m0.069s
00:08:57.285 sys 0m0.110s
00:08:57.285 14:41:35 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.285 14:41:35 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:57.285 14:41:35 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:57.285 14:41:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.285 14:41:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.285 14:41:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.285 ************************************
00:08:57.285 START TEST nvme_reserve
00:08:57.285 ************************************
00:08:57.285 14:41:35 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:57.543 =====================================================
00:08:57.543 NVMe Controller at PCI bus 0, device 16, function 0
00:08:57.543 =====================================================
00:08:57.543 Reservations:                Not Supported
00:08:57.543 =====================================================
00:08:57.543 NVMe Controller at PCI bus 0, device 17, function 0
00:08:57.543 =====================================================
00:08:57.543 Reservations:                Not Supported
00:08:57.543 =====================================================
00:08:57.543 NVMe Controller at PCI bus 0, device 19, function 0
00:08:57.543 =====================================================
00:08:57.543 Reservations:                Not Supported
00:08:57.543 =====================================================
00:08:57.543 NVMe Controller at PCI bus 0, device 18, function 0
00:08:57.543 =====================================================
00:08:57.543 Reservations:                Not Supported
00:08:57.543 Reservation test passed
00:08:57.543
00:08:57.543 real 0m0.220s
00:08:57.543 user 0m0.078s
00:08:57.543 sys 0m0.099s
00:08:57.543 14:41:35 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.543 14:41:35 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:57.543 ************************************
00:08:57.543 END TEST nvme_reserve
00:08:57.543 ************************************
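The "Reservations: Not Supported" lines above come straight from the controller's identify data; QEMU's emulated NVMe does not set the reservations bit in ONCS, so the test has nothing to exercise and passes trivially. A minimal sketch of the check, assuming an already-attached controller handle:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Print whether a controller advertises reservation support (an ONCS bit
 * in the identify-controller data). */
static void print_reservation_support(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    printf("Reservations:                %s\n",
           cdata->oncs.reservations ? "Supported" : "Not Supported");
}
```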
00:08:57.543 14:41:35 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:57.543 14:41:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.543 14:41:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.543 14:41:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.543 ************************************
00:08:57.543 START TEST nvme_err_injection
00:08:57.543 ************************************
00:08:57.543 14:41:35 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:57.801 NVMe Error Injection test
00:08:57.801 Attached to 0000:00:10.0
00:08:57.801 Attached to 0000:00:11.0
00:08:57.801 Attached to 0000:00:13.0
00:08:57.801 Attached to 0000:00:12.0
00:08:57.801 0000:00:10.0: get features failed as expected
00:08:57.801 0000:00:11.0: get features failed as expected
00:08:57.801 0000:00:13.0: get features failed as expected
00:08:57.801 0000:00:12.0: get features failed as expected
00:08:57.801 0000:00:10.0: get features successfully as expected
00:08:57.801 0000:00:11.0: get features successfully as expected
00:08:57.801 0000:00:13.0: get features successfully as expected
00:08:57.801 0000:00:12.0: get features successfully as expected
00:08:57.801 0000:00:10.0: read failed as expected
00:08:57.801 0000:00:12.0: read failed as expected
00:08:57.801 0000:00:11.0: read failed as expected
00:08:57.801 0000:00:13.0: read failed as expected
00:08:57.801 0000:00:10.0: read successfully as expected
00:08:57.801 0000:00:11.0: read successfully as expected
00:08:57.801 0000:00:13.0: read successfully as expected
00:08:57.801 0000:00:12.0: read successfully as expected
00:08:57.801 Cleaning up...
00:08:57.801
00:08:57.801 real 0m0.229s
00:08:57.801 user 0m0.091s
00:08:57.801 sys 0m0.094s
00:08:57.801 14:41:35 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.801 ************************************
00:08:57.801 END TEST nvme_err_injection
00:08:57.801 ************************************
00:08:57.801 14:41:35 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:57.801 14:41:35 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:57.801 14:41:35 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:08:57.801 14:41:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.801 14:41:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.801 ************************************
00:08:57.801 START TEST nvme_overhead
00:08:57.801 ************************************
00:08:57.801 14:41:35 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:59.178 Initializing NVMe Controllers
00:08:59.178 Attached to 0000:00:10.0
00:08:59.178 Attached to 0000:00:11.0
00:08:59.178 Attached to 0000:00:13.0
00:08:59.178 Attached to 0000:00:12.0
00:08:59.178 Initialization complete. Launching workers.
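While nvme_overhead spins up above, a note on the error-injection hooks just exercised: the err_injection test arms a software error against the admin queue before issuing Get Features, which is why the first round fails "as expected" and the retry succeeds once the injection is cleared. A sketch of the two calls involved, assuming an attached controller (passing NULL for the qpair targets the admin queue):

```c
#include "spdk/nvme.h"

/* Make the next Get Features admin command fail with Invalid Field,
 * then clear the injection so later calls succeed. */
static void inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                                            SPDK_NVME_OPC_GET_FEATURES,
                                            false /* do_not_submit */,
                                            0 /* timeout_in_us */,
                                            1 /* err_count */,
                                            SPDK_NVME_SCT_GENERIC,
                                            SPDK_NVME_SC_INVALID_FIELD);

    /* ... issue Get Features here; it completes with an error ... */

    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                               SPDK_NVME_OPC_GET_FEATURES);
}
```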
00:08:59.178 submit (in ns)   avg, min, max = 12319.6, 11025.4, 65474.6
00:08:59.178 complete (in ns) avg, min, max =  8209.5,  7856.2, 75633.8
00:08:59.178
00:08:59.178 Submit histogram
00:08:59.178 ================
00:08:59.178        Range in us     Cumulative     Count
00:08:59.178    10.978 -    11.028:    0.0063% (         1)
00:08:59.178 [per-bucket cumulative latency entries omitted]
00:08:59.179    65.378 -    65.772:  100.0000% (         1)
00:08:59.179
00:08:59.179 Complete histogram
00:08:59.179 ==================
00:08:59.179        Range in us     Cumulative     Count
00:08:59.179     7.828 -     7.877:    0.0190% (         3)
00:08:59.179 [per-bucket cumulative latency entries omitted]
00:08:59.179    75.618 -    76.012:  100.0000% (         1)
00:08:59.179
00:08:59.179 ************************************
00:08:59.179 END TEST nvme_overhead
00:08:59.179 ************************************
00:08:59.179
00:08:59.179 real 0m1.217s
00:08:59.179 user 0m1.083s
00:08:59.179 sys 0m0.087s
00:08:59.179 14:41:37 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:59.179 14:41:37 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
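nvme_overhead times the submission call and the completion callback separately, which is where the "submit (in ns)" and "complete (in ns)" lines above come from: submission cost clusters around 12 us and completion around 8 us per 4 KiB I/O. A sketch of the bookkeeping around one I/O using SPDK's TSC helpers (the stats struct is illustrative, not the test's own):

```c
#include "spdk/env.h"

struct lat_stats {
    uint64_t min_ns, max_ns, total_ns, samples;
};

/* Convert a TSC delta to nanoseconds and fold it into running stats. */
static void record(struct lat_stats *s, uint64_t tsc_start, uint64_t tsc_end)
{
    uint64_t ns = (tsc_end - tsc_start) * 1000000000ULL / spdk_get_ticks_hz();

    if (s->samples == 0 || ns < s->min_ns) s->min_ns = ns;
    if (ns > s->max_ns) s->max_ns = ns;
    s->total_ns += ns;
    s->samples++;
}

/* Usage around a submission:
 *   uint64_t t0 = spdk_get_ticks();
 *   spdk_nvme_ns_cmd_read(...);
 *   record(&submit_stats, t0, spdk_get_ticks());
 * The average printed above is total_ns / samples. */
```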
00:08:59.180 14:41:37 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:59.180 14:41:37 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:59.180 14:41:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:59.180 14:41:37 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:59.180 ************************************
00:08:59.180 START TEST nvme_arbitration
00:08:59.180 ************************************
00:08:59.180 14:41:37 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:02.480 Initializing NVMe Controllers
00:09:02.480 Attached to 0000:00:10.0
00:09:02.480 Attached to 0000:00:11.0
00:09:02.480 Attached to 0000:00:13.0
00:09:02.480 Attached to 0000:00:12.0
00:09:02.480 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:02.480 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:02.481 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:02.481 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:02.481 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:02.481 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:02.481 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:02.481 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:02.481 Initialization complete. Launching workers.
00:09:02.481 Starting thread on core 1 with urgent priority queue
00:09:02.481 Starting thread on core 2 with urgent priority queue
00:09:02.481 Starting thread on core 3 with urgent priority queue
00:09:02.481 Starting thread on core 0 with urgent priority queue
00:09:02.481 QEMU NVMe Ctrl (12340 ) core 0:  874.67 IO/s   114.33 secs/100000 ios
00:09:02.481 QEMU NVMe Ctrl (12342 ) core 0:  874.67 IO/s   114.33 secs/100000 ios
00:09:02.481 QEMU NVMe Ctrl (12341 ) core 1:  810.67 IO/s   123.36 secs/100000 ios
00:09:02.481 QEMU NVMe Ctrl (12342 ) core 1:  810.67 IO/s   123.36 secs/100000 ios
00:09:02.481 QEMU NVMe Ctrl (12343 ) core 2:  746.67 IO/s   133.93 secs/100000 ios
00:09:02.481 QEMU NVMe Ctrl (12342 ) core 3: 1024.00 IO/s    97.66 secs/100000 ios
00:09:02.481 ========================================================
00:09:02.481
00:09:02.481
00:09:02.481 real 0m3.325s
00:09:02.481 user 0m9.252s
00:09:02.481 sys 0m0.134s
00:09:02.481 14:41:40 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:02.481 14:41:40 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:09:02.481 ************************************
00:09:02.481 END TEST nvme_arbitration
00:09:02.481 ************************************
00:09:02.481 14:41:40 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:02.481 14:41:40 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:02.481 14:41:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.481 14:41:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:02.481 ************************************
00:09:02.481 START TEST nvme_single_aen
00:09:02.481 ************************************
00:09:02.481 14:41:40 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
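Before the AER output below, a note on the arbitration results above: four lcores drive queues of different priorities against the same controllers, and with weighted round robin the urgent-priority queue on core 3 sustains the highest rate (1024.00 IO/s versus 746.67 on core 2). A hedged sketch of how the example requests WRR and a prioritized qpair (constants from the SPDK headers; WRR must be requested in the probe callback, before the controller is attached):

```c
#include "spdk/nvme.h"

/* In spdk_nvme_probe()'s probe_cb: ask for weighted round robin. */
static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
    return true;
}

/* After attach: create an urgent-priority I/O qpair. */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts qopts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
    qopts.qprio = SPDK_NVME_QPRIO_URGENT;
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}
```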
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:02.738 Asynchronous Event Request test 00:09:02.738 Attached to 0000:00:10.0 00:09:02.738 Attached to 0000:00:11.0 00:09:02.738 Attached to 0000:00:13.0 00:09:02.738 Attached to 0000:00:12.0 00:09:02.738 Reset controller to setup AER completions for this process 00:09:02.738 Registering asynchronous event callbacks... 00:09:02.738 Getting orig temperature thresholds of all controllers 00:09:02.738 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.738 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.738 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.738 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.738 Setting all controllers temperature threshold low to trigger AER 00:09:02.738 Waiting for all controllers temperature threshold to be set lower 00:09:02.738 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.738 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:02.738 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.738 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:02.738 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.738 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:02.738 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.738 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:02.738 Waiting for all controllers to trigger AER and reset threshold 00:09:02.738 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.738 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.738 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.738 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.738 Cleaning up... 
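The aer tool above drives the event itself: it reads each controller's temperature-threshold feature, programs it below the current composite temperature so the device raises a temperature AER at once, and restores the threshold when the callback fires. A minimal sketch of the same trick with nvme-cli — device name hypothetical, feature 0x04 is Temperature Threshold, values in Kelvin:

  nvme get-feature /dev/nvme0 -f 0x04           # current threshold: 343 K (0x157) on these controllers
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x142  # 322 K, just under the reported 323 K, should fire the AER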
00:09:02.738 ************************************ 00:09:02.738 END TEST nvme_single_aen 00:09:02.738 ************************************ 00:09:02.738 00:09:02.738 real 0m0.210s 00:09:02.738 user 0m0.077s 00:09:02.738 sys 0m0.096s 00:09:02.738 14:41:40 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.738 14:41:40 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:02.738 14:41:40 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:02.738 14:41:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.738 14:41:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.738 14:41:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.738 ************************************ 00:09:02.738 START TEST nvme_doorbell_aers 00:09:02.738 ************************************ 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:02.738 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:02.739 14:41:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:02.997 [2024-12-09 14:41:40.930385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:12.993 Executing: test_write_invalid_db 00:09:12.993 Waiting for AER completion... 00:09:12.993 Failure: test_write_invalid_db 00:09:12.993 00:09:12.993 Executing: test_invalid_db_write_overflow_sq 00:09:12.993 Waiting for AER completion... 00:09:12.993 Failure: test_invalid_db_write_overflow_sq 00:09:12.993 00:09:12.993 Executing: test_invalid_db_write_overflow_cq 00:09:12.993 Waiting for AER completion... 
00:09:12.993 Failure: test_invalid_db_write_overflow_cq 00:09:12.993 00:09:12.993 14:41:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:12.993 14:41:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:12.993 [2024-12-09 14:41:51.003336] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:22.976 Executing: test_write_invalid_db 00:09:22.976 Waiting for AER completion... 00:09:22.976 Failure: test_write_invalid_db 00:09:22.976 00:09:22.976 Executing: test_invalid_db_write_overflow_sq 00:09:22.976 Waiting for AER completion... 00:09:22.976 Failure: test_invalid_db_write_overflow_sq 00:09:22.976 00:09:22.976 Executing: test_invalid_db_write_overflow_cq 00:09:22.976 Waiting for AER completion... 00:09:22.976 Failure: test_invalid_db_write_overflow_cq 00:09:22.976 00:09:22.976 14:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:22.976 14:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:22.976 [2024-12-09 14:42:01.025373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:32.943 Executing: test_write_invalid_db 00:09:32.943 Waiting for AER completion... 00:09:32.943 Failure: test_write_invalid_db 00:09:32.943 00:09:32.943 Executing: test_invalid_db_write_overflow_sq 00:09:32.943 Waiting for AER completion... 00:09:32.943 Failure: test_invalid_db_write_overflow_sq 00:09:32.943 00:09:32.943 Executing: test_invalid_db_write_overflow_cq 00:09:32.943 Waiting for AER completion... 00:09:32.943 Failure: test_invalid_db_write_overflow_cq 00:09:32.943 00:09:32.943 14:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:32.943 14:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.201 [2024-12-09 14:42:11.071375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 Executing: test_write_invalid_db 00:09:43.216 Waiting for AER completion... 00:09:43.216 Failure: test_write_invalid_db 00:09:43.216 00:09:43.216 Executing: test_invalid_db_write_overflow_sq 00:09:43.216 Waiting for AER completion... 00:09:43.216 Failure: test_invalid_db_write_overflow_sq 00:09:43.216 00:09:43.216 Executing: test_invalid_db_write_overflow_cq 00:09:43.216 Waiting for AER completion... 
00:09:43.216 Failure: test_invalid_db_write_overflow_cq 00:09:43.216 00:09:43.216 00:09:43.216 real 0m40.218s 00:09:43.216 user 0m34.114s 00:09:43.216 sys 0m5.693s 00:09:43.216 14:42:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.216 ************************************ 00:09:43.216 END TEST nvme_doorbell_aers 00:09:43.216 14:42:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:43.216 ************************************ 00:09:43.216 14:42:20 nvme -- nvme/nvme.sh@97 -- # uname 00:09:43.216 14:42:20 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:43.216 14:42:20 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.216 14:42:20 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:43.216 14:42:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.216 14:42:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.216 ************************************ 00:09:43.216 START TEST nvme_multi_aen 00:09:43.216 ************************************ 00:09:43.216 14:42:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.216 [2024-12-09 14:42:21.114071] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.114239] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.114252] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.115671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.115701] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.115711] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.116874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.116895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.116903] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.118055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.118108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 00:09:43.216 [2024-12-09 14:42:21.118277] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64591) is not found. Dropping the request. 
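A note on the nvme_multi_aen invocation above: relative to nvme_single_aen it adds -m to aer, which — judging by the 'Child process pid' line that follows — forks a child that runs the full AER sequence before the parent does; -T again selects the temperature-threshold trigger. Both readings are inferred from this log, not from the tool's help text. The 'Dropping the request' errors are expected noise: admin commands left pending by an earlier process in this run (pid 64591) are flushed as aer re-attaches the controllers.

  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0   # -i 0: the shared-memory id used throughout this run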
00:09:43.216 Child process pid: 65118 00:09:43.474 [Child] Asynchronous Event Request test 00:09:43.474 [Child] Attached to 0000:00:10.0 00:09:43.474 [Child] Attached to 0000:00:11.0 00:09:43.474 [Child] Attached to 0000:00:13.0 00:09:43.474 [Child] Attached to 0000:00:12.0 00:09:43.474 [Child] Registering asynchronous event callbacks... 00:09:43.474 [Child] Getting orig temperature thresholds of all controllers 00:09:43.474 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:43.474 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 [Child] Cleaning up... 00:09:43.474 Asynchronous Event Request test 00:09:43.474 Attached to 0000:00:10.0 00:09:43.474 Attached to 0000:00:11.0 00:09:43.474 Attached to 0000:00:13.0 00:09:43.474 Attached to 0000:00:12.0 00:09:43.474 Reset controller to setup AER completions for this process 00:09:43.474 Registering asynchronous event callbacks... 
00:09:43.474 Getting orig temperature thresholds of all controllers 00:09:43.474 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.474 Setting all controllers temperature threshold low to trigger AER 00:09:43.474 Waiting for all controllers temperature threshold to be set lower 00:09:43.474 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:43.474 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:43.474 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:43.474 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.474 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:43.474 Waiting for all controllers to trigger AER and reset threshold 00:09:43.474 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.474 Cleaning up... 00:09:43.474 00:09:43.474 real 0m0.462s 00:09:43.474 user 0m0.135s 00:09:43.474 sys 0m0.207s 00:09:43.474 14:42:21 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.474 14:42:21 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 ************************************ 00:09:43.474 END TEST nvme_multi_aen 00:09:43.474 ************************************ 00:09:43.474 14:42:21 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:43.474 14:42:21 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.474 14:42:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.474 14:42:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 ************************************ 00:09:43.474 START TEST nvme_startup 00:09:43.474 ************************************ 00:09:43.474 14:42:21 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:43.732 Initializing NVMe Controllers 00:09:43.732 Attached to 0000:00:10.0 00:09:43.732 Attached to 0000:00:11.0 00:09:43.732 Attached to 0000:00:13.0 00:09:43.732 Attached to 0000:00:12.0 00:09:43.732 Initialization complete. 00:09:43.732 Time used:145366.031 (us). 
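nvme_startup, just run above, puts a hard budget on the attach path: assuming -t is the allowed initialization time in microseconds, -t 1000000 grants 1 s for all four controllers, and 'Time used:145366.031 (us)' says they came up in roughly 145 ms — about 15% of the budget.

  /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000   # budget in us; 145366 us used here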
00:09:43.732 00:09:43.732 real 0m0.207s 00:09:43.732 user 0m0.068s 00:09:43.732 sys 0m0.098s 00:09:43.732 14:42:21 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.732 14:42:21 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:43.732 ************************************ 00:09:43.732 END TEST nvme_startup 00:09:43.732 ************************************ 00:09:43.732 14:42:21 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:43.732 14:42:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.732 14:42:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.732 14:42:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.732 ************************************ 00:09:43.732 START TEST nvme_multi_secondary 00:09:43.732 ************************************ 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65169 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65170 00:09:43.732 14:42:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:47.012 Initializing NVMe Controllers 00:09:47.012 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.012 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:47.012 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:47.012 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:47.012 Initialization complete. Launching workers. 
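nvme_multi_secondary, starting above, is the multi-process smoke test: three spdk_nvme_perf instances share one DPDK shared-memory instance through the same -i 0 id while pinned to disjoint cores, and the -c 0x1 instance is started first with a longer -t 5 runtime, presumably so it stays alive as the DPDK primary for the secondaries' full 3 s. The launch pattern, taken verbatim from the commands above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # first instance, core 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary, core 1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &  # secondary, core 2
  wait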
00:09:47.012 ======================================================== 00:09:47.012 Latency(us) 00:09:47.012 Device Information : IOPS MiB/s Average min max 00:09:47.012 PCIE (0000:00:10.0) NSID 1 from core 1: 7569.85 29.57 2112.27 899.62 6404.29 00:09:47.012 PCIE (0000:00:11.0) NSID 1 from core 1: 7569.85 29.57 2113.26 1009.65 7060.38 00:09:47.012 PCIE (0000:00:13.0) NSID 1 from core 1: 7569.85 29.57 2113.53 972.27 7245.57 00:09:47.012 PCIE (0000:00:12.0) NSID 1 from core 1: 7569.85 29.57 2113.87 921.32 7378.63 00:09:47.012 PCIE (0000:00:12.0) NSID 2 from core 1: 7569.85 29.57 2113.91 987.96 7514.44 00:09:47.012 PCIE (0000:00:12.0) NSID 3 from core 1: 7569.85 29.57 2114.36 943.84 7603.03 00:09:47.012 ======================================================== 00:09:47.012 Total : 45419.11 177.42 2113.53 899.62 7603.03 00:09:47.012 00:09:47.012 Initializing NVMe Controllers 00:09:47.012 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.012 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.012 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:47.012 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:47.012 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:47.012 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:47.012 Initialization complete. Launching workers. 00:09:47.012 ======================================================== 00:09:47.012 Latency(us) 00:09:47.012 Device Information : IOPS MiB/s Average min max 00:09:47.012 PCIE (0000:00:10.0) NSID 1 from core 2: 3068.92 11.99 5211.82 1018.11 17815.16 00:09:47.012 PCIE (0000:00:11.0) NSID 1 from core 2: 3068.92 11.99 5212.80 1118.68 16409.94 00:09:47.012 PCIE (0000:00:13.0) NSID 1 from core 2: 3068.92 11.99 5212.88 1049.11 16245.21 00:09:47.012 PCIE (0000:00:12.0) NSID 1 from core 2: 3068.92 11.99 5213.28 1066.31 15018.87 00:09:47.012 PCIE (0000:00:12.0) NSID 2 from core 2: 3068.92 11.99 5213.27 1066.87 15349.19 00:09:47.012 PCIE (0000:00:12.0) NSID 3 from core 2: 3068.92 11.99 5212.79 981.36 17896.53 00:09:47.012 ======================================================== 00:09:47.012 Total : 18413.54 71.93 5212.81 981.36 17896.53 00:09:47.012 00:09:47.271 14:42:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65169 00:09:49.168 Initializing NVMe Controllers 00:09:49.168 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.168 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.168 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.168 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.168 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.168 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.168 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.168 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.168 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.168 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.168 Initialization complete. Launching workers. 
00:09:49.168 ======================================================== 00:09:49.168 Latency(us) 00:09:49.168 Device Information : IOPS MiB/s Average min max 00:09:49.168 PCIE (0000:00:10.0) NSID 1 from core 0: 10725.54 41.90 1490.54 691.61 5727.11 00:09:49.168 PCIE (0000:00:11.0) NSID 1 from core 0: 10725.54 41.90 1491.37 713.25 5696.01 00:09:49.168 PCIE (0000:00:13.0) NSID 1 from core 0: 10725.54 41.90 1491.35 649.58 5776.74 00:09:49.168 PCIE (0000:00:12.0) NSID 1 from core 0: 10725.54 41.90 1491.33 629.82 5764.55 00:09:49.168 PCIE (0000:00:12.0) NSID 2 from core 0: 10725.54 41.90 1491.31 602.19 6162.35 00:09:49.168 PCIE (0000:00:12.0) NSID 3 from core 0: 10725.34 41.90 1491.31 582.93 6398.36 00:09:49.168 ======================================================== 00:09:49.168 Total : 64353.06 251.38 1491.20 582.93 6398.36 00:09:49.168 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65170 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65239 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65240 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:49.168 14:42:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:52.458 Initializing NVMe Controllers 00:09:52.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:52.458 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:52.458 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:52.458 Initialization complete. Launching workers. 
00:09:52.458 ======================================================== 00:09:52.458 Latency(us) 00:09:52.458 Device Information : IOPS MiB/s Average min max 00:09:52.458 PCIE (0000:00:10.0) NSID 1 from core 1: 5911.52 23.09 2705.11 702.17 12382.90 00:09:52.458 PCIE (0000:00:11.0) NSID 1 from core 1: 5911.52 23.09 2706.14 726.25 12607.94 00:09:52.458 PCIE (0000:00:13.0) NSID 1 from core 1: 5911.52 23.09 2706.07 722.07 12081.49 00:09:52.458 PCIE (0000:00:12.0) NSID 1 from core 1: 5911.52 23.09 2706.15 712.39 11464.16 00:09:52.458 PCIE (0000:00:12.0) NSID 2 from core 1: 5911.52 23.09 2706.09 710.63 11557.72 00:09:52.458 PCIE (0000:00:12.0) NSID 3 from core 1: 5911.52 23.09 2706.07 704.01 12170.35 00:09:52.458 ======================================================== 00:09:52.458 Total : 35469.09 138.55 2705.94 702.17 12607.94 00:09:52.458 00:09:52.458 Initializing NVMe Controllers 00:09:52.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.458 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:52.458 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:52.458 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:52.458 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:52.458 Initialization complete. Launching workers. 00:09:52.458 ======================================================== 00:09:52.458 Latency(us) 00:09:52.458 Device Information : IOPS MiB/s Average min max 00:09:52.458 PCIE (0000:00:10.0) NSID 1 from core 0: 5748.38 22.45 2781.86 727.11 11708.08 00:09:52.458 PCIE (0000:00:11.0) NSID 1 from core 0: 5748.38 22.45 2782.88 733.55 11472.86 00:09:52.458 PCIE (0000:00:13.0) NSID 1 from core 0: 5748.38 22.45 2782.83 733.37 10819.56 00:09:52.458 PCIE (0000:00:12.0) NSID 1 from core 0: 5748.38 22.45 2782.80 742.29 11821.91 00:09:52.458 PCIE (0000:00:12.0) NSID 2 from core 0: 5748.38 22.45 2782.77 741.37 10969.47 00:09:52.458 PCIE (0000:00:12.0) NSID 3 from core 0: 5748.38 22.45 2782.72 746.92 11461.44 00:09:52.458 ======================================================== 00:09:52.458 Total : 34490.30 134.73 2782.64 727.11 11821.91 00:09:52.458 00:09:54.399 Initializing NVMe Controllers 00:09:54.399 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.399 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.399 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.399 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.399 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:54.399 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:54.399 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:54.399 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:54.399 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:54.399 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:54.399 Initialization complete. Launching workers. 
00:09:54.399 ======================================================== 00:09:54.399 Latency(us) 00:09:54.399 Device Information : IOPS MiB/s Average min max 00:09:54.399 PCIE (0000:00:10.0) NSID 1 from core 2: 3040.68 11.88 5260.43 797.49 24801.31 00:09:54.399 PCIE (0000:00:11.0) NSID 1 from core 2: 3040.68 11.88 5261.22 799.40 24573.25 00:09:54.399 PCIE (0000:00:13.0) NSID 1 from core 2: 3040.68 11.88 5261.39 800.39 24388.94 00:09:54.399 PCIE (0000:00:12.0) NSID 1 from core 2: 3040.68 11.88 5261.03 808.27 29535.70 00:09:54.399 PCIE (0000:00:12.0) NSID 2 from core 2: 3040.68 11.88 5261.22 807.13 27085.51 00:09:54.399 PCIE (0000:00:12.0) NSID 3 from core 2: 3040.68 11.88 5261.12 812.09 29933.56 00:09:54.399 ======================================================== 00:09:54.399 Total : 18244.07 71.27 5261.06 797.49 29933.56 00:09:54.399 00:09:54.399 ************************************ 00:09:54.399 END TEST nvme_multi_secondary 00:09:54.399 ************************************ 00:09:54.399 14:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65239 00:09:54.399 14:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65240 00:09:54.399 00:09:54.399 real 0m10.544s 00:09:54.399 user 0m18.364s 00:09:54.399 sys 0m0.713s 00:09:54.399 14:42:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.399 14:42:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:54.399 14:42:32 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:54.399 14:42:32 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64200 ]] 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1094 -- # kill 64200 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1095 -- # wait 64200 00:09:54.399 [2024-12-09 14:42:32.272431] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.272646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.272679] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.272696] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.275608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.275677] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.275697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.275716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.278231] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 
00:09:54.399 [2024-12-09 14:42:32.278410] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.278431] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.278451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.280940] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.281059] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.281136] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.281299] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65117) is not found. Dropping the request. 00:09:54.399 [2024-12-09 14:42:32.392568] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:54.399 14:42:32 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.399 14:42:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.399 ************************************ 00:09:54.399 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:54.399 ************************************ 00:09:54.399 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:54.399 * Looking for test storage... 
00:09:54.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:54.399 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.399 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.399 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.658 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.658 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.659 --rc genhtml_branch_coverage=1 00:09:54.659 --rc genhtml_function_coverage=1 00:09:54.659 --rc genhtml_legend=1 00:09:54.659 --rc geninfo_all_blocks=1 00:09:54.659 --rc geninfo_unexecuted_blocks=1 00:09:54.659 00:09:54.659 ' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.659 --rc genhtml_branch_coverage=1 00:09:54.659 --rc genhtml_function_coverage=1 00:09:54.659 --rc genhtml_legend=1 00:09:54.659 --rc geninfo_all_blocks=1 00:09:54.659 --rc geninfo_unexecuted_blocks=1 00:09:54.659 00:09:54.659 ' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.659 --rc genhtml_branch_coverage=1 00:09:54.659 --rc genhtml_function_coverage=1 00:09:54.659 --rc genhtml_legend=1 00:09:54.659 --rc geninfo_all_blocks=1 00:09:54.659 --rc geninfo_unexecuted_blocks=1 00:09:54.659 00:09:54.659 ' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.659 --rc genhtml_branch_coverage=1 00:09:54.659 --rc genhtml_function_coverage=1 00:09:54.659 --rc genhtml_legend=1 00:09:54.659 --rc geninfo_all_blocks=1 00:09:54.659 --rc geninfo_unexecuted_blocks=1 00:09:54.659 00:09:54.659 ' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:54.659 
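The knobs just set sketch the whole test: the injection will hold one admin command with opcode 10 for up to err_injection_timeout=15000000 us, while test_timeout=5 caps the wait at 5 s, so the test can only pass if the controller reset aborts the stuck command early. The sct=0/sc=1 pair set in the next entries is Generic Command Status / Invalid Command Opcode — the INVALID OPCODE (00/01) completion printed further down.

  printf '0x%x\n' 10                        # -> 0xa, the NVMe admin Get Features opcode being injected
  echo "$(( 15000000 / 1000000 ))s vs 5s"   # 15 s injection hold vs the 5 s test budget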
14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:54.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65402 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65402 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65402 ']' 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
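Collapsed out of the xtrace, the sequence the following entries perform against the freshly started spdk_tgt is approximately — paths as in this run; the full base64-encoded Get Features command appears verbatim below:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> &   # this command gets stuck
  sleep 2
  scripts/rpc.py bdev_nvme_reset_controller nvme0   # the reset must abort and complete it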
00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.659 14:42:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:54.659 [2024-12-09 14:42:32.730993] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:09:54.659 [2024-12-09 14:42:32.731251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65402 ] 00:09:54.920 [2024-12-09 14:42:32.903013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.920 [2024-12-09 14:42:33.005444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.920 [2024-12-09 14:42:33.005695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.920 [2024-12-09 14:42:33.005960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.920 [2024-12-09 14:42:33.005981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.487 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.487 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:55.487 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:55.487 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.487 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.745 nvme0n1 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_QbbvU.txt 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.745 true 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733755353 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65427 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:55.745 14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:55.745 
14:42:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.645 [2024-12-09 14:42:35.703119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:57.645 [2024-12-09 14:42:35.705086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:57.645 [2024-12-09 14:42:35.705139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:57.645 [2024-12-09 14:42:35.705156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:57.645 [2024-12-09 14:42:35.707134] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:57.645 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65427 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65427 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65427 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:57.645 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_QbbvU.txt 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_QbbvU.txt 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65402 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65402 ']' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65402 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65402 00:09:57.904 killing process with pid 65402 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65402' 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65402 00:09:57.904 14:42:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65402 00:09:59.278 14:42:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:59.278 ************************************ 00:09:59.278 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:59.278 ************************************ 00:09:59.278 14:42:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:59.278 00:09:59.278 real 0m4.896s 
00:09:59.278 user 0m17.284s 00:09:59.278 sys 0m0.535s 00:09:59.278 14:42:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.278 14:42:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.278 14:42:37 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:59.278 14:42:37 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:59.278 14:42:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.278 14:42:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.278 14:42:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.278 ************************************ 00:09:59.278 START TEST nvme_fio 00:09:59.278 ************************************ 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:59.278 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:59.278 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:59.278 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:59.278 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:59.537 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:59.537 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:59.537 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:59.795 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:59.795 14:42:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:59.795 14:42:37 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:59.795 14:42:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.053 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:00.053 fio-3.35 00:10:00.053 Starting 1 thread 00:10:06.657 00:10:06.657 test: (groupid=0, jobs=1): err= 0: pid=65567: Mon Dec 9 14:42:43 2024 00:10:06.657 read: IOPS=21.9k, BW=85.6MiB/s (89.7MB/s)(171MiB/2001msec) 00:10:06.657 slat (nsec): min=4195, max=67274, avg=5179.90, stdev=2180.16 00:10:06.657 clat (usec): min=255, max=7990, avg=2917.11, stdev=870.41 00:10:06.657 lat (usec): min=261, max=8001, avg=2922.29, stdev=871.48 00:10:06.657 clat percentiles (usec): 00:10:06.657 | 1.00th=[ 1762], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2409], 00:10:06.657 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:10:06.657 | 70.00th=[ 2868], 80.00th=[ 3294], 90.00th=[ 4146], 95.00th=[ 4948], 00:10:06.657 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6980], 99.95th=[ 7373], 00:10:06.658 | 99.99th=[ 7898] 00:10:06.658 bw ( KiB/s): min=86320, max=93904, per=100.00%, avg=89688.00, stdev=3862.46, samples=3 00:10:06.658 iops : min=21580, max=23476, avg=22422.00, stdev=965.61, samples=3 00:10:06.658 write: IOPS=21.8k, BW=85.0MiB/s (89.1MB/s)(170MiB/2001msec); 0 zone resets 00:10:06.658 slat (usec): min=4, max=173, avg= 5.41, stdev= 2.25 00:10:06.658 clat (usec): min=291, max=8165, avg=2923.01, stdev=871.39 00:10:06.658 lat (usec): min=297, max=8170, avg=2928.42, stdev=872.43 00:10:06.658 clat percentiles (usec): 00:10:06.658 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:06.658 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:10:06.658 | 70.00th=[ 2868], 80.00th=[ 3294], 90.00th=[ 4146], 95.00th=[ 4948], 00:10:06.658 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7373], 00:10:06.658 | 99.99th=[ 7963] 00:10:06.658 bw ( KiB/s): min=85584, max=93720, per=100.00%, avg=89813.33, stdev=4077.59, samples=3 00:10:06.658 iops : min=21396, max=23430, avg=22453.33, stdev=1019.40, samples=3 00:10:06.658 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:06.658 lat (msec) : 2=2.16%, 4=86.93%, 10=10.88% 00:10:06.658 cpu : usr=99.15%, sys=0.10%, ctx=3, majf=0, minf=608 00:10:06.658 
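The fio statistics for this controller continue just below; for reference, the invocation pattern nvme_fio assembles above preloads the SPDK external ioengine (with libasan ahead of it, since this is an ASAN build) and names the target by transport, dots standing in for the colons of the PCI address:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096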
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:06.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.658 issued rwts: total=43828,43532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.658 00:10:06.658 Run status group 0 (all jobs): 00:10:06.658 READ: bw=85.6MiB/s (89.7MB/s), 85.6MiB/s-85.6MiB/s (89.7MB/s-89.7MB/s), io=171MiB (180MB), run=2001-2001msec 00:10:06.658 WRITE: bw=85.0MiB/s (89.1MB/s), 85.0MiB/s-85.0MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 00:10:06.658 ----------------------------------------------------- 00:10:06.658 Suppressions used: 00:10:06.658 count bytes template 00:10:06.658 1 32 /usr/src/fio/parse.c 00:10:06.658 1 8 libtcmalloc_minimal.so 00:10:06.658 ----------------------------------------------------- 00:10:06.658 00:10:06.658 14:42:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:06.658 14:42:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.658 14:42:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:06.658 14:42:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:06.658 14:42:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.658 14:42:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:06.658 14:42:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.658 14:42:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.658 14:42:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.658 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.658 fio-3.35 00:10:06.658 Starting 1 thread 00:10:13.253 00:10:13.253 test: (groupid=0, jobs=1): err= 0: pid=65634: Mon Dec 9 14:42:50 2024 00:10:13.253 read: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(185MiB/2001msec) 00:10:13.253 slat (nsec): min=3349, max=64414, avg=5041.90, stdev=2056.93 00:10:13.254 clat (usec): min=246, max=8998, avg=2711.46, stdev=774.23 00:10:13.254 lat (usec): min=251, max=9010, avg=2716.50, stdev=775.50 00:10:13.254 clat percentiles (usec): 00:10:13.254 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:13.254 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:13.254 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 3392], 95.00th=[ 4490], 00:10:13.254 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 8356], 99.95th=[ 8586], 00:10:13.254 | 99.99th=[ 8848] 00:10:13.254 bw ( KiB/s): min=94600, max=95760, per=100.00%, avg=95274.67, stdev=602.73, samples=3 00:10:13.254 iops : min=23650, max=23940, avg=23818.67, stdev=150.68, samples=3 00:10:13.254 write: IOPS=23.4k, BW=91.6MiB/s (96.0MB/s)(183MiB/2001msec); 0 zone resets 00:10:13.254 slat (nsec): min=3522, max=64573, avg=5324.44, stdev=2123.13 00:10:13.254 clat (usec): min=231, max=9066, avg=2711.81, stdev=772.15 00:10:13.254 lat (usec): min=236, max=9078, avg=2717.14, stdev=773.42 00:10:13.254 clat percentiles (usec): 00:10:13.254 | 1.00th=[ 1762], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:13.254 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:13.254 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 3359], 95.00th=[ 4490], 00:10:13.254 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 8160], 99.95th=[ 8455], 00:10:13.254 | 99.99th=[ 8848] 00:10:13.254 bw ( KiB/s): min=94848, max=95576, per=100.00%, avg=95280.00, stdev=382.58, samples=3 00:10:13.254 iops : min=23712, max=23894, avg=23820.00, stdev=95.65, samples=3 00:10:13.254 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:10:13.254 lat (msec) : 2=2.25%, 4=90.44%, 10=7.25% 00:10:13.254 cpu : usr=99.20%, sys=0.10%, ctx=4, majf=0, minf=608 00:10:13.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:13.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.254 issued rwts: total=47233,46918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.254 00:10:13.254 Run status group 0 (all jobs): 00:10:13.254 READ: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=185MiB (193MB), run=2001-2001msec 00:10:13.254 WRITE: bw=91.6MiB/s (96.0MB/s), 91.6MiB/s-91.6MiB/s (96.0MB/s-96.0MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:13.254 ----------------------------------------------------- 00:10:13.254 Suppressions used: 00:10:13.254 count bytes template 00:10:13.254 1 32 /usr/src/fio/parse.c 00:10:13.254 1 8 libtcmalloc_minimal.so 00:10:13.254 ----------------------------------------------------- 00:10:13.254 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@44 
-- # ran_fio=true 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:13.254 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:13.513 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:13.513 14:42:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:13.513 14:42:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.772 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:13.772 fio-3.35 00:10:13.772 Starting 1 thread 00:10:18.040 00:10:18.040 test: (groupid=0, jobs=1): err= 0: pid=65699: Mon Dec 9 14:42:56 2024 00:10:18.040 read: IOPS=16.7k, BW=65.3MiB/s (68.4MB/s)(132MiB/2017msec) 00:10:18.040 slat (usec): min=3, max=102, avg= 4.95, stdev= 2.35 00:10:18.040 clat (usec): min=855, max=19005, avg=2983.27, stdev=1242.28 00:10:18.040 lat (usec): min=858, max=19009, avg=2988.22, stdev=1242.97 00:10:18.040 clat percentiles (usec): 00:10:18.040 | 1.00th=[ 1516], 5.00th=[ 2024], 10.00th=[ 2278], 20.00th=[ 2376], 00:10:18.040 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2573], 
00:10:18.040 | 70.00th=[ 2737], 80.00th=[ 3326], 90.00th=[ 4686], 95.00th=[ 5735], 00:10:18.040 | 99.00th=[ 7570], 99.50th=[ 8455], 99.90th=[11076], 99.95th=[17957], 00:10:18.040 | 99.99th=[18744] 00:10:18.040 bw ( KiB/s): min=29872, max=96544, per=100.00%, avg=67354.00, stdev=32955.93, samples=4 00:10:18.040 iops : min= 7468, max=24136, avg=16838.50, stdev=8238.98, samples=4 00:10:18.040 write: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(132MiB/2017msec); 0 zone resets 00:10:18.040 slat (nsec): min=3466, max=51812, avg=5248.44, stdev=2269.01 00:10:18.040 clat (usec): min=894, max=37888, avg=4639.86, stdev=5994.44 00:10:18.040 lat (usec): min=897, max=37892, avg=4645.11, stdev=5994.60 00:10:18.040 clat percentiles (usec): 00:10:18.040 | 1.00th=[ 1598], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2409], 00:10:18.040 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:10:18.040 | 70.00th=[ 2835], 80.00th=[ 3982], 90.00th=[ 6456], 95.00th=[21627], 00:10:18.040 | 99.00th=[30278], 99.50th=[32113], 99.90th=[35914], 99.95th=[36963], 00:10:18.040 | 99.99th=[37487] 00:10:18.040 bw ( KiB/s): min=30248, max=95864, per=100.00%, avg=67338.00, stdev=32459.82, samples=4 00:10:18.040 iops : min= 7562, max=23966, avg=16834.50, stdev=8114.96, samples=4 00:10:18.040 lat (usec) : 1000=0.05% 00:10:18.040 lat (msec) : 2=3.96%, 4=78.81%, 10=12.86%, 20=1.20%, 50=3.12% 00:10:18.040 cpu : usr=99.31%, sys=0.00%, ctx=4, majf=0, minf=608 00:10:18.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.040 issued rwts: total=33701,33772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.040 00:10:18.040 Run status group 0 (all jobs): 00:10:18.040 READ: bw=65.3MiB/s (68.4MB/s), 65.3MiB/s-65.3MiB/s (68.4MB/s-68.4MB/s), io=132MiB (138MB), run=2017-2017msec 00:10:18.040 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=132MiB (138MB), run=2017-2017msec 00:10:18.298 ----------------------------------------------------- 00:10:18.298 Suppressions used: 00:10:18.298 count bytes template 00:10:18.298 1 32 /usr/src/fio/parse.c 00:10:18.298 1 8 libtcmalloc_minimal.so 00:10:18.298 ----------------------------------------------------- 00:10:18.299 00:10:18.299 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.299 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:18.299 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.299 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:18.557 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.557 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:18.815 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:18.815 14:42:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:18.815 14:42:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.815 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:18.815 fio-3.35 00:10:18.815 Starting 1 thread 00:10:28.791 00:10:28.791 test: (groupid=0, jobs=1): err= 0: pid=65756: Mon Dec 9 14:43:05 2024 00:10:28.791 read: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec) 00:10:28.791 slat (nsec): min=3355, max=64131, avg=5384.11, stdev=2247.61 00:10:28.791 clat (usec): min=207, max=10650, avg=2891.06, stdev=846.19 00:10:28.791 lat (usec): min=212, max=10714, avg=2896.44, stdev=847.48 00:10:28.791 clat percentiles (usec): 00:10:28.791 | 1.00th=[ 1713], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2474], 00:10:28.791 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2704], 00:10:28.791 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 3752], 95.00th=[ 4817], 00:10:28.791 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7504], 99.95th=[ 9241], 00:10:28.791 | 99.99th=[10552] 00:10:28.791 bw ( KiB/s): min=80080, max=96920, per=100.00%, avg=90704.00, stdev=9244.96, samples=3 00:10:28.791 iops : min=20020, max=24230, avg=22676.00, stdev=2311.24, samples=3 00:10:28.791 write: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec); 0 zone resets 00:10:28.791 slat (nsec): min=3457, max=53265, avg=5765.78, stdev=2263.81 00:10:28.791 clat (usec): min=333, max=10574, avg=2890.51, stdev=840.56 00:10:28.791 lat (usec): min=339, max=10595, avg=2896.28, stdev=841.86 00:10:28.791 clat percentiles (usec): 00:10:28.791 | 1.00th=[ 1696], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2474], 00:10:28.791 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2704], 00:10:28.791 | 70.00th=[ 2802], 80.00th=[ 3097], 90.00th=[ 3752], 95.00th=[ 4817], 00:10:28.791 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 8160], 
99.95th=[ 9503], 00:10:28.791 | 99.99th=[10290] 00:10:28.792 bw ( KiB/s): min=81536, max=96624, per=100.00%, avg=90904.00, stdev=8178.81, samples=3 00:10:28.792 iops : min=20384, max=24156, avg=22726.00, stdev=2044.70, samples=3 00:10:28.792 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.06% 00:10:28.792 lat (msec) : 2=1.92%, 4=89.89%, 10=8.08%, 20=0.02% 00:10:28.792 cpu : usr=99.20%, sys=0.10%, ctx=4, majf=0, minf=607 00:10:28.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:28.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.792 issued rwts: total=44263,43967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.792 00:10:28.792 Run status group 0 (all jobs): 00:10:28.792 READ: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:28.792 WRITE: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:28.792 ----------------------------------------------------- 00:10:28.792 Suppressions used: 00:10:28.792 count bytes template 00:10:28.792 1 32 /usr/src/fio/parse.c 00:10:28.792 1 8 libtcmalloc_minimal.so 00:10:28.792 ----------------------------------------------------- 00:10:28.792 00:10:28.792 ************************************ 00:10:28.792 END TEST nvme_fio 00:10:28.792 ************************************ 00:10:28.792 14:43:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:28.792 14:43:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:28.792 00:10:28.792 real 0m28.085s 00:10:28.792 user 0m20.476s 00:10:28.792 sys 0m12.334s 00:10:28.792 14:43:05 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.792 14:43:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 ************************************ 00:10:28.792 END TEST nvme 00:10:28.792 ************************************ 00:10:28.792 00:10:28.792 real 1m37.334s 00:10:28.792 user 3m41.436s 00:10:28.792 sys 0m23.020s 00:10:28.792 14:43:05 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.792 14:43:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 14:43:05 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:28.792 14:43:05 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.792 14:43:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.792 14:43:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.792 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 ************************************ 00:10:28.792 START TEST nvme_scc 00:10:28.792 ************************************ 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:28.792 * Looking for test storage... 
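[editor's note] The four nvme_fio runs above all follow the same pattern from autotest_common.sh's fio_plugin helper: the SPDK fio ioengine is a shared object built with ASan, so the sanitizer runtime must be preloaded ahead of the fio binary itself. Below is a minimal standalone sketch of that pattern, reusing the paths and the 0000.00.10.0 filename from the trace (the plugin expects ':' in the PCI address rewritten as '.', since fio reserves ':' in filenames). It is illustrative, not the exact upstream helper.

#!/usr/bin/env bash
set -euo pipefail

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_bin=/usr/src/fio/fio
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# Resolve the sanitizer runtime the plugin was linked against, as in the
# sanitizers loop traced at autotest_common.sh@1348-1351 above:
# ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}' || true)
    if [[ -n $asan_lib ]]; then
        break
    fi
done

# Preload the sanitizer runtime (if any) together with the SPDK ioengine.
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" "$job" \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096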
00:10:28.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.792 --rc genhtml_branch_coverage=1 00:10:28.792 --rc genhtml_function_coverage=1 00:10:28.792 --rc genhtml_legend=1 00:10:28.792 --rc geninfo_all_blocks=1 00:10:28.792 --rc geninfo_unexecuted_blocks=1 00:10:28.792 00:10:28.792 ' 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.792 --rc genhtml_branch_coverage=1 00:10:28.792 --rc genhtml_function_coverage=1 00:10:28.792 --rc genhtml_legend=1 00:10:28.792 --rc geninfo_all_blocks=1 00:10:28.792 --rc geninfo_unexecuted_blocks=1 00:10:28.792 00:10:28.792 ' 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:28.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.792 --rc genhtml_branch_coverage=1 00:10:28.792 --rc genhtml_function_coverage=1 00:10:28.792 --rc genhtml_legend=1 00:10:28.792 --rc geninfo_all_blocks=1 00:10:28.792 --rc geninfo_unexecuted_blocks=1 00:10:28.792 00:10:28.792 ' 00:10:28.792 14:43:05 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.792 --rc genhtml_branch_coverage=1 00:10:28.792 --rc genhtml_function_coverage=1 00:10:28.792 --rc genhtml_legend=1 00:10:28.792 --rc geninfo_all_blocks=1 00:10:28.792 --rc geninfo_unexecuted_blocks=1 00:10:28.792 00:10:28.792 ' 00:10:28.792 14:43:05 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.792 14:43:05 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.792 14:43:05 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.792 14:43:05 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.792 14:43:05 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.792 14:43:05 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:28.792 14:43:05 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
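[editor's note] The scripts/common.sh trace just above ('lt 1.15 2' -> cmp_versions) decides whether the installed lcov predates 2.x before exporting the LCOV_OPTS coverage flags. A simplified sketch of that comparison follows, assuming purely numeric version fields (the real helper runs each field through its decimal() sanitizer first):

#!/usr/bin/env bash
# Split both versions on '.', '-' and ':' and compare field by field,
# treating missing fields in the shorter version as 0.
lt() { # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # versions are equal, so not "less than"
}

lt 1.15 2 && echo "lcov < 2: enable branch/function coverage opts"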
00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:28.792 14:43:05 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:28.792 14:43:05 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.792 14:43:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:28.792 14:43:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:28.792 14:43:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:28.792 14:43:05 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:28.793 Waiting for block devices as requested 00:10:28.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.793 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.793 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.793 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.075 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:34.075 14:43:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:34.075 14:43:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:34.075 14:43:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:34.075 14:43:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:34.075 14:43:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
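[editor's note] The scan that starts here is the nvme_get pattern from functions.sh: pipe nvme-cli's id-ctrl output through an IFS=':' read loop and store each 'field : value' pair in a global associative array (nvme0 here, then one array per namespace). A condensed sketch of the same parse; the trimming is simplified, but like the trace it preserves trailing padding in values such as sn='12341 ':

#!/usr/bin/env bash
# nvme-cli's id-ctrl output is one "field : value" pair per line; read splits
# at the first ':' only, so values that themselves contain ':' (e.g. ps0)
# stay intact in $val.
declare -gA nvme0
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue           # skip headers/blank lines
    reg=${reg//[[:space:]]/}            # field names are single tokens
    val=${val#"${val%%[![:space:]]*}"}  # strip leading whitespace only
    nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"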
00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.075 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
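[editor's note] Among the fields captured above, mdts=7 is worth decoding: NVMe expresses the Maximum Data Transfer Size as a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page, a quick check:

mdts=7        # from nvme0[mdts] above
min_page=4096 # assumes CAP.MPSMIN = 0, i.e. 4 KiB pages
echo "max single transfer: $(( min_page << mdts )) bytes"  # 524288 = 512 KiB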
00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:34.076 14:43:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:34.076 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.077 14:43:11 nvme_scc -- 
00:10:34.077 14:43:11 nvme_scc -- nvme/functions.sh@23 -- nvme_get nvme0 (id-ctrl /dev/nvme0, remaining registers): pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:34.078 14:43:11 nvme_scc -- nvme/functions.sh@53-57 -- local -n _ctrl_ns=nvme0_ns; [[ -e /sys/class/nvme/nvme0/ng0n1 ]]; ns_dev=ng0n1; nvme_get ng0n1 id-ns /dev/ng0n1
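The functions.sh@16-23 lines repeated throughout this trace are one helper, nvme_get, walking nvme-cli output line by line and eval'ing each "register : value" pair into a global associative array. A minimal sketch of the loop the trace implies (reconstructed from the @-line numbers above, not the verbatim source; the exact whitespace trimming is an assumption):

  nvme_get() {
          local ref=$1 reg val                      # @17: name of the array to fill, e.g. nvme0
          shift                                     # @18: remaining args are the nvme-cli command
          local -gA "$ref=()"                       # @20: declare the array at global scope
          while IFS=: read -r reg val; do           # @21: split each output line on ':'
                  [[ -n $val ]] || continue         # @22: skip lines that carry no value
                  eval "${ref}[${reg// /}]=\"${val# }\""   # @23: e.g. nvme0[sqes]="0x66"
          done < <("$@")                            # @16: e.g. /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
  }

After a call like "nvme_get nvme0 id-ctrl /dev/nvme0", later checks can simply read ${nvme0[oncs]}, ${nvme0[sqes]} and so on.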
00:10:34.078 14:43:11 nvme_scc -- nvme/functions.sh@23 -- nvme_get ng0n1 (id-ns /dev/ng0n1): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:34.078 14:43:11 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=ng0n1
00:10:34.078 14:43:11 nvme_scc -- nvme/functions.sh@54-57 -- [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
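The @53-@58 lines are the per-controller namespace walk: both the character node ng0n1 and the block node nvme0n1 match the extglob pattern and map to namespace 1 of nvme0. A sketch of that loop under the same caveats as above (the wrapper name and the explicit extglob/continue handling are hypothetical):

  shopt -s extglob                                  # the @(...) pattern below needs extglob
  scan_namespaces() {
          local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev
          local -n _ctrl_ns=${ctrl_dev}_ns          # @53: nameref onto e.g. nvme0_ns
          for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng0n1, nvme0n1
                  [[ -e $ns ]] || continue          # @55
                  ns_dev=${ns##*/}                  # @56
                  nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fill ng0n1[...]/nvme0n1[...]
                  _ctrl_ns[${ns##*n}]=$ns_dev       # @58: key is the namespace number
          done
  }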
00:10:34.080 14:43:11 nvme_scc -- nvme/functions.sh@23 -- nvme_get nvme0n1 (id-ns /dev/nvme0n1): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=nvme0n1
00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@60-63 -- ctrls["$ctrl_dev"]=nvme0; nvmes["$ctrl_dev"]=nvme0_ns; bdfs["$ctrl_dev"]=0000:00:11.0; ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@47-51 -- [[ -e /sys/class/nvme/nvme1 ]]; pci=0000:00:10.0; pci_can_use 0000:00:10.0 (scripts/common.sh@18-27: not filtered, return 0); ctrl_dev=nvme1
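With both namespaces parsed, @60-@63 record the controller in the global bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls), and @47-@51 advance to the next /sys/class/nvme entry once scripts/common.sh's pci_can_use allow-list check passes. Roughly, as a sketch (how @49 derives the BDF is an assumption; the [[ -z '' ]] above suggests an empty device filter):

  for ctrl in /sys/class/nvme/nvme*; do            # @47
          [[ -e $ctrl ]] || continue               # @48
          pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: e.g. 0000:00:10.0 (assumed derivation)
          pci_can_use "$pci" || continue           # @50 -> scripts/common.sh@18-27
          ctrl_dev=${ctrl##*/}                     # @51: nvme0, nvme1, ...
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52
          scan_namespaces "$ctrl"                  # @53-@58 (see the sketch above)
          ctrls["$ctrl_dev"]=$ctrl_dev             # @60
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # @61
          bdfs["$ctrl_dev"]=$pci                   # @62
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
  done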
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.081 
14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:34.081 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:34.082 
14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.082 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
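[Editor's note: the trace above is the nvme_get helper in test/nvme/functions.sh caching every field of `nvme id-ctrl /dev/nvme1` into a global bash associative array. A minimal sketch of the same technique, assuming a simplified re-implementation — this is not the actual functions.sh code; only the array name, the nvme-cli path, and the sample values come from this log:

    declare -A nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # headers like "NVME Identify Controller:" carry no value
        reg=${reg//[[:space:]]/}         # "sqes      " -> "sqes"
        val=${val# }                     # drop the single space nvme-cli prints after the colon
        eval "nvme1[$reg]=\"$val\""      # e.g. nvme1[sqes]="0x66", as echoed in the trace below
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    echo "${nvme1[sqes]}"                # -> 0x66
]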
00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.083 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.083 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.084 14:43:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
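[Editor's note: the ng1n1 fields just captured (nsze/ncap/nuse = 0x17a17a, flbas = 0x7, and lbaf7 = "ms:64 lbads:12 ... (in use)") are enough to work out the namespace size: the low bits of flbas select LBA format 7, whose lbads of 12 means 4096-byte blocks. A hedged back-of-the-envelope in bash — the helper name is mine, the values are from the trace:

    ns_bytes() {                  # usage: ns_bytes <nsze> <lbads>
        echo $(( $1 * (1 << $2) ))
    }
    ns_bytes $((0x17a17a)) 12     # -> 6343335936 bytes (~5.9 GiB) for ng1n1
]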
00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.084 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:34.085 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 
14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.085 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
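[Editor's note: the namespace loop header recorded at functions.sh@54 visits both device-node spellings of the same namespace — the ng1n1 char device (parsed above) and the nvme1n1 block device (being parsed here) — via a bash extglob pattern. A standalone sketch of how that glob expands, with the sysfs path taken from the trace:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # "ng${ctrl##*nvme}" -> "ng1"    (char nodes, ngXnY)
        # "${ctrl##*/}n"     -> "nvme1n" (block nodes, nvmeXnY)
        echo "${ns##*/}"                 # -> ng1n1, then nvme1n1
    done
]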
00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:34.086 
14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.086 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:34.087 14:43:11 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:34.087 14:43:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:34.087 14:43:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:34.087 14:43:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:34.087 14:43:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
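[Editor's note: with nvme1 fully parsed, functions.sh@60-63 (a short way above) filed it into the suite's bookkeeping arrays — ctrls, nvmes, bdfs, ordered_ctrls — before the outer @47 loop moved on to nvme2 at PCI 0000:00:12.0. A hedged sketch of the consumer side of that bookkeeping, assuming only the array shapes visible in this trace:

    declare -A ctrls=([nvme1]=nvme1)  bdfs=([nvme1]=0000:00:10.0)
    declare -a ordered_ctrls=([1]=nvme1)
    for dev in "${!ctrls[@]}"; do
        printf '%s -> %s\n' "$dev" "${bdfs[$dev]}"   # nvme1 -> 0000:00:10.0
    done
]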
00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.087 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
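The trace above is nvme/functions.sh's nvme_get populating the global associative array nvme2 from the plain-text output of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2`: each output line is split at the first ':' with `IFS=: read -r reg val`, lines without a value are skipped by the `[[ -n ... ]]` check, and each remaining pair is stored via eval as `nvme2[$reg]=$val` (vid=0x1b36, ssvid=0x1af4, sn='12342 ', mn='QEMU NVMe Ctrl ', fr='8.0.0 ', and so on). A minimal standalone sketch of that pattern follows; the function name and whitespace trimming are illustrative, not the literal nvme/functions.sh implementation, and running it needs nvme-cli plus access to the device node:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern traced above (illustrative).
    parse_id_ctrl() {   # usage: parse_id_ctrl <array-name> <ctrl-device>
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                    # declare the global array, e.g. nvme2=()
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # skip banner/blank lines, as in the trace
        reg=${reg%%[[:space:]]*}             # field name left of the colon, e.g. "vid"
        val=${val#"${val%%[![:space:]]*}"}   # drop leading spaces from the value
        eval "$ref[$reg]=\$val"              # nvme2[vid]=0x1b36, nvme2[mdts]=7, ...
      done < <(nvme id-ctrl "$dev")
    }

    parse_id_ctrl nvme2 /dev/nvme2
    echo "vid=${nvme2[vid]} sn='${nvme2[sn]}' mdts=${nvme2[mdts]}"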
00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:34.088 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
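Several of the fields captured here are coded units rather than direct values: mdts=7 recorded earlier is the maximum data transfer size as a power of two in units of the controller's minimum page size, and wctemp=343/cctemp=373 are temperature thresholds in Kelvin. A quick illustration of decoding them from the array populated above, assuming the usual 4 KiB minimum page size for this QEMU controller:

    # Decode two captured id-ctrl fields (assumes nvme2[] from the parse above
    # and a 4 KiB MPSMIN, which is an assumption, not read from the log).
    echo "max transfer: $(( 4096 << ${nvme2[mdts]:-0} )) bytes"   # 7 -> 524288 (512 KiB)
    echo "warn/crit temp: $(( ${nvme2[wctemp]} - 273 ))C / $(( ${nvme2[cctemp]} - 273 ))C"  # 70C / 100C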
00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.088 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:34.089 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:34.089 
14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.089 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.090 
14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
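With the controller's id-ctrl fields stored, the loop at functions.sh@54 walks the controller's sysfs directory for namespace nodes, matching both the generic character devices (ng2n1, ng2n2, ...) and the block namespaces (nvme2n1, ...) with a single extglob pattern, then runs nvme_get again with id-ns for each hit, as seen for ng2n1 above. A sketch of that walk, with the echo standing in for the real id-ns parse:

    # Sketch of the namespace enumeration at functions.sh@54; extglob is
    # required for the @(...) alternation used in the trace.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
      # expands to @(ng2|nvme2n)* and matches ng2n1, ng2n2, nvme2n1, ...
      ns_dev=${ns##*/}
      echo "would parse: nvme id-ns /dev/$ns_dev"
    done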
00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.090 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:34.091 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:34.092 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 
14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.092 14:43:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.092 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.093 14:43:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.093 14:43:11 
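The stretch above repeats one mechanism over and over: the nvme_get helper traced at nvme/functions.sh@16-23 runs nvme-cli's id-ns against a device node, splits each "key : value" output line on the first colon (IFS=:), and evals the pair into a globally scoped associative array named after the node (ng2n2, ng2n3, ...). A minimal self-contained sketch of that loop, reconstructed from the trace rather than copied verbatim from functions.sh:

    #!/usr/bin/env bash
    # Sketch of the parse loop traced at nvme/functions.sh@16-23.
    # Assumes "key : value" lines, as nvme-cli's id-ns output produces.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declare -gA ng2n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "nsze " -> "nsze"
            val=${val# }                   # drop the space after the ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\$val"    # ng2n2[nsze]=0x100000, ...
        done < <("$@")
    }
    # Usage matching the trace:
    #   nvme_get_sketch ng2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
    #   echo "${ng2n2[mssrl]}"             # -> 128

Because val keeps everything after the first colon, multi-colon values such as 'ms:0 lbads:12 rp:0 (in use)' land in the array intact, which is exactly what the lbafN assignments in the trace show.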
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.093 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.094 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.095 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:34.095 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.095 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:34.095 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
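The outer loop visible between the per-namespace dumps (functions.sh@54-58) is the device discovery step: for a controller like /sys/class/nvme/nvme2 it extglob-matches both the generic character nodes (ng2nY) and the block nodes (nvme2nY), parses each with nvme_get, and records it in _ctrl_ns keyed by namespace id. Roughly, as a sketch assuming extglob is enabled (the @(...) pattern in the trace requires it):

    #!/usr/bin/env bash
    # Sketch of the enumeration traced at nvme/functions.sh@54-58.
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme2              # the controller seen in this trace
    declare -A _ctrl_ns=()

    # Expands to @(ng2|nvme2n)*: matches ng2n1..ng2n3 and nvme2n1..nvme2n3
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # ng2n2, nvme2n1, ...
        # Key is the text after the last 'n', i.e. the namespace id; an
        # ngXnY and its nvmeXnY twin share an id, so the later glob match
        # (the block node) ends up as the recorded value.
        _ctrl_ns[${ns##*n}]=$ns_dev
    done
    declare -p _ctrl_ns                     # [1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3

That overwrite behavior matches the ordering in the trace: the ng2nY nodes are parsed first, then the nvme2nY nodes replace them slot by slot in _ctrl_ns.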
]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:34.096 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.096 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.097 14:43:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:34.098 
14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:34.098 14:43:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.098 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:34.099 14:43:11 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:34.099 14:43:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:34.099 14:43:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:34.099 14:43:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:34.099 14:43:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:34.099 14:43:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:34.100 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:34.100 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.100 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 
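[Editor's note] Several of the nvme3 id-ctrl values captured here (oacs=0x12a, frmw=0x3, lpa=0x7) are bitmasks rather than scalars. A hedged illustration of how such a mask decodes in bash arithmetic; decode_oacs is an invented name, not a functions.sh helper, and the bit labels follow the OACS layout in the NVMe base specification:

    # Hypothetical helper, for illustration only (not part of functions.sh).
    decode_oacs() {
      local oacs=$(( $1 ))   # "0x12a" from the trace evaluates to 298
      local -a names=(security-send-recv format-nvm firmware ns-mgmt self-test
                      directives nvme-mi virt-mgmt doorbell-buffer get-lba-status)
      local bit
      for bit in "${!names[@]}"; do
        (( oacs & 1 << bit )) && echo "OACS bit $bit: ${names[bit]}"
      done
    }
    decode_oacs 0x12a   # -> bits 1, 3, 5, 8: format-nvm, ns-mgmt,
                        #    directives, doorbell-buffer

The same `(( mask & 1 << bit ))` idiom is what the suite uses further down to test ONCS for copy support.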
14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:34.101 14:43:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:34.101 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 
14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:34.102 
14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.102 14:43:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:34.102 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:34.103 14:43:11 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:34.103 14:43:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:34.103 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:34.104 14:43:11 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:34.104 14:43:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:34.104 14:43:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:34.104 14:43:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:34.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.621 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.621 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.621 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.881 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.881 14:43:12 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:34.881 14:43:12 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.881 14:43:12 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.881 14:43:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:34.881 ************************************ 00:10:34.881 START TEST nvme_simple_copy 00:10:34.881 ************************************ 00:10:34.881 14:43:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:35.141 Initializing NVMe Controllers 00:10:35.141 Attaching to 0000:00:10.0 00:10:35.141 Controller supports SCC. Attached to 0000:00:10.0 00:10:35.141 Namespace ID: 1 size: 6GB 00:10:35.141 Initialization complete. 
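[Editor's note] The controller selection traced just above is why nvme1 runs the simple-copy test whose results continue below: ctrl_has_scc pulls each controller's ONCS word through a nameref and tests bit 8, which the NVMe base specification assigns to the Copy command, so all four QEMU controllers with oncs=0x15d qualify and the first one echoed (nvme1) is used. A condensed sketch of that dispatch, assuming ctrls and the per-controller arrays are populated as earlier in the log:

    # Condensed sketch of functions.sh@184-209 as traced above; assumes the
    # global ctrls / nvme0..nvme3 associative arrays built earlier in the log.
    ctrl_has_scc_sketch() {
      local -n _ctrl=$1                # nameref into e.g. the nvme1 array
      local oncs=${_ctrl[oncs]}        # 0x15d on these QEMU controllers
      (( oncs & 1 << 8 ))              # ONCS bit 8: Copy command supported
    }

    first_scc_ctrl() {
      local ctrl
      for ctrl in "${!ctrls[@]}"; do   # iteration order is unspecified, hence
        ctrl_has_scc_sketch "$ctrl" && # the nvme1/nvme0/nvme3/nvme2 order seen
          { echo "$ctrl"; return 0; }  # in the trace
      done
      return 1
    }

The real get_ctrls_with_feature echoes every match and the caller keeps the first, which is how `ctrl=nvme1 bdf=0000:00:10.0` ends up driving the copy test reported here.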
00:10:35.141 00:10:35.141 Controller QEMU NVMe Ctrl (12340 ) 00:10:35.141 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:35.141 Namespace Block Size:4096 00:10:35.141 Writing LBAs 0 to 63 with Random Data 00:10:35.141 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:35.141 LBAs matching Written Data: 64 00:10:35.141 00:10:35.141 real 0m0.255s 00:10:35.141 user 0m0.094s 00:10:35.141 sys 0m0.059s 00:10:35.141 14:43:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.141 14:43:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 ************************************ 00:10:35.141 END TEST nvme_simple_copy 00:10:35.141 ************************************ 00:10:35.141 00:10:35.141 real 0m7.591s 00:10:35.141 user 0m1.050s 00:10:35.141 sys 0m1.286s 00:10:35.141 14:43:13 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.141 ************************************ 00:10:35.141 END TEST nvme_scc 00:10:35.141 ************************************ 00:10:35.141 14:43:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 14:43:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:35.141 14:43:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:35.141 14:43:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:35.141 14:43:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:35.141 14:43:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:35.141 14:43:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.141 14:43:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.141 14:43:13 -- common/autotest_common.sh@10 -- # set +x 00:10:35.141 ************************************ 00:10:35.141 START TEST nvme_fdp 00:10:35.141 ************************************ 00:10:35.141 14:43:13 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:35.141 * Looking for test storage... 00:10:35.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:35.141 14:43:13 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.141 14:43:13 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.141 14:43:13 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.401 --rc genhtml_branch_coverage=1 00:10:35.401 --rc genhtml_function_coverage=1 00:10:35.401 --rc genhtml_legend=1 00:10:35.401 --rc geninfo_all_blocks=1 00:10:35.401 --rc geninfo_unexecuted_blocks=1 00:10:35.401 00:10:35.401 ' 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.401 --rc genhtml_branch_coverage=1 00:10:35.401 --rc genhtml_function_coverage=1 00:10:35.401 --rc genhtml_legend=1 00:10:35.401 --rc geninfo_all_blocks=1 00:10:35.401 --rc geninfo_unexecuted_blocks=1 00:10:35.401 00:10:35.401 ' 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.401 --rc genhtml_branch_coverage=1 00:10:35.401 --rc genhtml_function_coverage=1 00:10:35.401 --rc genhtml_legend=1 00:10:35.401 --rc geninfo_all_blocks=1 00:10:35.401 --rc geninfo_unexecuted_blocks=1 00:10:35.401 00:10:35.401 ' 00:10:35.401 14:43:13 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.401 --rc genhtml_branch_coverage=1 00:10:35.401 --rc genhtml_function_coverage=1 00:10:35.401 --rc genhtml_legend=1 00:10:35.401 --rc geninfo_all_blocks=1 00:10:35.401 --rc geninfo_unexecuted_blocks=1 00:10:35.401 00:10:35.401 ' 00:10:35.401 14:43:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.401 14:43:13 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.401 14:43:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.401 14:43:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.401 14:43:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.401 14:43:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:35.401 14:43:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:35.401 14:43:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:35.401 14:43:13 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.401 14:43:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:35.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.922 Waiting for block devices as requested 00:10:35.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.922 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.922 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:36.181 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:41.480 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:41.480 14:43:19 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:41.480 14:43:19 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:41.480 14:43:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.480 14:43:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:41.480 14:43:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.480 14:43:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:41.480 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.480 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.481 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.481 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:41.482 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 
14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:41.482 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.482 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:41.483 14:43:19 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:41.483 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.483 14:43:19 
nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:10:41.483 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
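What the trace above is doing: nvme_get runs nvme-cli's id-ns against the device node, then walks each "field : value" line of the output with IFS=: and read -r, storing every field into a global associative array named after the node (here ng0n1). A minimal standalone sketch of that idea, under a hypothetical helper name; the real loop in nvme/functions.sh evals the assignment rather than using printf -v:

#!/usr/bin/env bash
# Sketch: parse `nvme id-ns` text output into a named global
# associative array, the way the trace builds ng0n1[...].
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                         # e.g. declare -gA ng0n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # "lbaf  0 " -> "lbaf0"
        [[ -n $reg && -n $val ]] || continue    # skip banner/blank lines
        printf -v "$ref[$reg]" '%s' "${val# }"  # ng0n1[mssrl]="128"
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}
# Usage: nvme_get_sketch ng0n1 /dev/ng0n1 && echo "${ng0n1[mssrl]}"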
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:10:41.484 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:41.485 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
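The eight lbafN entries just captured are the namespace's supported LBA formats, and flbas (0x4 for nvme0n1 above, bits 0-3) selects the one in use; the "(in use)" marker on lbaf4 confirms it: 4 KiB data blocks (lbads:12) with no per-block metadata (ms:0). The all-zero nguid/eui64 values simply mean this QEMU namespace carries no persistent unique identifier. A small sketch of decoding the active block size from the parsed array, assuming the nvme0n1 array above and at most 16 formats so only flbas bits 0-3 matter:

# Sketch: which LBA format is active, and how big is a block?
fmt_idx=$(( ${nvme0n1[flbas]} & 0xf ))       # 0x4 -> 4
lbaf=${nvme0n1[lbaf$fmt_idx]}                # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf##*lbads:}                       # "12 rp:0 (in use)"
lbads=${lbads%% *}                           # "12"
echo "block size: $(( 1 << lbads )) bytes"   # -> 4096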
"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:41.486 14:43:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.486 14:43:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:41.486 14:43:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.486 14:43:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:41.486 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:41.486 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.487 14:43:19 nvme_fdp -- 
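Two of the fields above are easy to misread: wctemp/cctemp (343/373) are Kelvin thresholds, and mdts=7 is a power-of-two multiplier of the controller's minimum memory page size, not a byte count. A sketch of turning them into human units; the 4 KiB MPSMIN is an assumption here, since it comes from the CAP register, which id-ctrl does not report:

# Sketch: interpret wctemp/cctemp and mdts from the nvme1 array.
echo "warning temp:  $(( ${nvme1[wctemp]} - 273 )) C"   # 343 K -> 70 C
echo "critical temp: $(( ${nvme1[cctemp]} - 273 )) C"   # 373 K -> 100 C
mpsmin=4096                                             # assumed CAP.MPSMIN page size
echo "max transfer:  $(( (1 << ${nvme1[mdts]}) * mpsmin / 1024 )) KiB"  # 2^7 * 4 KiB = 512 KiB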
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:41.487 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:41.488 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
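The oncs=0x15d captured above is the optional-NVM-command bitmask. Assuming the standard NVMe base-spec bit layout (bit 0 Compare, bit 2 Dataset Management, bit 3 Write Zeroes, bit 4 Save/Select in Features, bit 6 Timestamp, bit 8 Copy), the Copy bit lines up with the non-zero mssrl/mcl/msrc values in the namespace dumps. A sketch of testing such bits from the parsed array:

# Sketch: test optional-command support bits out of nvme1[oncs]=0x15d.
oncs=${nvme1[oncs]}
(( oncs & (1 << 2) )) && echo "supports Dataset Management"
(( oncs & (1 << 3) )) && echo "supports Write Zeroes"
(( oncs & (1 << 8) )) && echo "supports Copy"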
rwl:0 idle_power:- active_power:-' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:41.489 14:43:19 
00:10:41.489 14:43:19 nvme_fdp -- nvme/functions.sh@21-23 -- # [trace condensed] ng1n1 id-ns (cont.): flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:41.490 14:43:19 nvme_fdp -- nvme/functions.sh@21-23 -- # [trace condensed] ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:41.490 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:10:41.490 14:43:19 nvme_fdp -- nvme/functions.sh@54-57 -- # [trace condensed] next namespace node: /sys/class/nvme/nvme1/nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a
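The namespace glob at functions.sh@54 relies on extglob: it pairs each controller with both its char-device node (ng1n1) and its block-device node (nvme1n1), which is why the same id-ns data is parsed twice below. A standalone sketch of just that pattern (the sysfs path is taken from the trace):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # After expansion the pattern is @("ng1"|"nvme1n")* and matches
    # ng1n1, nvme1n1, ng1n2, nvme1n2, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"
    done

'${ctrl##*nvme}' strips everything through the last 'nvme', leaving '1', and '${ctrl##*/}' leaves 'nvme1', so the alternation is literally @("ng1"|"nvme1n")*.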
00:10:41.491 14:43:19 nvme_fdp -- nvme/functions.sh@21-23 -- # [trace condensed] nvme1n1 id-ns parses identically to ng1n1: nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21-23 -- # [trace condensed] nvme1n1 LBA formats (cont.): lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@58-63 -- # [trace condensed] nvme1 registered: _ctrl_ns[1]=nvme1n1 (block node replaces ng1n1 at index 1); ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@47-51 -- # [trace condensed] next controller: /sys/class/nvme/nvme2 (pci=0000:00:12.0); pci_can_use 0000:00:12.0 -> return 0 (no allow/block filters configured); ctrl_dev=nvme2
00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
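Zooming out, the bookkeeping at functions.sh@47-63 that the trace just replayed for nvme1, and is about to replay for nvme2, amounts to the loop below. pci_can_use and the sysfs address lookup are reconstructions from the xtrace fragments (scripts/common.sh@18-27); SPDK's actual source may differ in detail:

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {   # sketch: the trace shows empty filter lists -> return 0
        local i
        for i in ${PCI_BLOCKED:-}; do
            [[ $i == "$1" ]] && return 1
        done
        [[ -z ${PCI_ALLOWED:-} ]] && return 0
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

    scan_nvme_ctrls() {
        local ctrl ctrl_dev pci ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(< "$ctrl/address")             # BDF, e.g. 0000:00:10.0 (assumed lookup)
            pci_can_use "$pci" || continue
            ctrl_dev=${ctrl##*/}                 # nvme1, nvme2, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"      # per-controller namespace map
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev      # nvme1n1 overwrites ng1n1 at [1]
            done
            unset -n _ctrl_ns
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

Once populated, later test stages can index the maps directly, e.g. "${bdfs[nvme1]}" for the PCI address or, through a nameref, "${nvme1[subnqn]}" for a parsed id-ctrl field.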
[[ -n '' ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.492 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:41.493 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.493 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:41.493 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:10:41.494 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
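The trace above is bash xtrace of functions.sh's nvme_get filling the nvme2 associative array field by field from the nvme-cli id-ctrl dump. A minimal standalone sketch of that parsing pattern, assuming nvme-cli's human-readable "field : value" output; the array name and echoed fields here are illustrative:

#!/usr/bin/env bash
# Mirror the IFS=: / read -r reg val / eval steps traced above: split each
# "reg : val" line on the first ':' and store it keyed by register name.
declare -A ctrl=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # strip the column padding around the key
    val=${val# }                    # drop the space after the colon
    [[ -n $val ]] || continue       # same guard as the [[ -n ... ]] tests above
    eval "ctrl[$reg]=\"$val\""      # same eval shape the trace shows
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} oncs=${ctrl[oncs]}"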
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
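The @54 loop above enumerates both the generic (ng2n*) and block (nvme2n*) namespace nodes of one controller with a single extglob pattern built from two parameter expansions. A small illustration of how that pattern comes together; the hard-coded ctrl path is the only assumption:

#!/usr/bin/env bash
shopt -s extglob                       # @(a|b) alternation requires extglob
ctrl=/sys/class/nvme/nvme2
echo "ng${ctrl##*nvme}"                # -> ng2     (generic char devices: ng2n1, ...)
echo "${ctrl##*/}n"                    # -> nvme2n  (block devices: nvme2n1, ...)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue           # an unmatched pattern stays literal; skip it
    echo "namespace node: ${ns##*/}"
done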
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:10:41.495 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
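ng2n1 reports flbas=0x4 with lbaf4 marked "(in use)" (ms:0 lbads:12), i.e. metadata-less 4096-byte blocks; with nsze=0x100000 that works out to a 4 GiB namespace. A quick shell check of that arithmetic, with the values copied from the dump above:

#!/usr/bin/env bash
flbas=0x4; lbads=12; nsze=0x100000          # from the ng2n1 id-ns dump above
fmt=$(( flbas & 0xf ))                      # low nibble of flbas selects the format
bs=$(( 1 << lbads ))                        # lbads is log2 of the data block size
echo "lbaf${fmt}: ${bs}-byte blocks"        # -> lbaf4: 4096-byte blocks
echo "$(( nsze * bs / 1024**3 )) GiB"       # -> 4 GiB (0x100000 blocks * 4096 B)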
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
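The @58 line files the parsed namespace into the controller's table through the nameref declared at @53 (local -n _ctrl_ns=nvme2_ns), with ${ns##*n} peeling the namespace index off the node name. A minimal reproduction of those two mechanisms; all names here are illustrative:

#!/usr/bin/env bash
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns            # nameref: writes land in nvme2_ns
for ns in ng2n1 ng2n2 ng2n3; do
    _ctrl_ns[${ns##*n}]=$ns             # ${ns##*n}: longest "*n" prefix removed -> 1, 2, 3
done
declare -p nvme2_ns                     # ([1]="ng2n1" [2]="ng2n2" [3]="ng2n3"), order may vary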
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:10:41.496 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:10:41.497 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
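ng2n1 and ng2n2 have come back with byte-identical geometry, and ng2n3 below repeats it again. One way to spot-check that equivalence outside the harness, assuming the same nvme-cli build and device nodes as this run:

#!/usr/bin/env bash
# Print only the sizing/format fields for each generic namespace node, so
# identical namespaces collapse to visibly identical summary lines.
for dev in /dev/ng2n1 /dev/ng2n2 /dev/ng2n3; do
    printf '%s: ' "$dev"
    /usr/local/src/nvme-cli/nvme id-ns "$dev" \
        | grep -E '^(nsze|ncap|flbas|nlbaf) ' \
        | tr -s ' ' | tr '\n' ' '
    echo
done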
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:10:41.498 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:41.499 14:43:19 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.499 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.500 
14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:41.500 14:43:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.500 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.501 
14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
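The trace above repeats one pattern per namespace: nvme_get runs /usr/local/src/nvme-cli/nvme id-ns against the device node, declares a global associative array named after that node (local -gA 'nvme2n1=()'), then reads the output line by line with IFS=: and evals every non-empty "reg : val" pair into the array. A minimal sketch of that loop, under the hypothetical helper name nvme_get_sketch (the real nvme/functions.sh implementation differs in detail):

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # e.g. declare -gA nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "lbaf  4 " -> "lbaf4", as keyed in the trace
            val=${val# }                         # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue # skips the banner line, like [[ -n '' ]] above
            eval "${ref}[\$reg]=\$val"           # nvme2n1[nsze]=0x100000, ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

    # usage matching the trace: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1
    # afterwards ${nvme2n1[nsze]} expands to 0x100000
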
00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:41.501 14:43:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:41.501 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:41.502 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.502 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:41.503 14:43:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:41.503 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:41.503 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.503 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:41.504 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:41.504 14:43:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:41.504 14:43:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:41.504 14:43:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:41.504 14:43:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:41.504 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 
14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.505 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.506 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
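The power-state fields (ps0, rwt, active_power_workload) are read the same way just below. What this long stretch of trace keeps repeating is the nvme_get helper from nvme/functions.sh: run nvme id-ctrl, split each output line on its first colon, and eval the pair into a global associative array so later helpers can look registers up by name. A minimal sketch of that loop, with details such as whitespace trimming simplified (the binary path and device come from the trace above):

# sketch: parse "name : value" lines from id-ctrl into an associative array
declare -gA nvme3=()
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip blank lines and separators
    reg=${reg// /}                         # keys arrive padded, e.g. "mdts    "
    eval "nvme3[$reg]=\"${val# }\""        # e.g. nvme3[mdts]=7
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

Because read -r reg val splits only on the first colon, compound values such as 'ms:0 lbads:9 rp:0' survive intact, which is why the lbaf entries earlier in the trace are stored as single strings.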
00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:41.507 14:43:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:41.507 14:43:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:41.507 14:43:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:42.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.332 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.590 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.590 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.590 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.590 14:43:20 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:42.590 14:43:20 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.590 14:43:20 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.590 14:43:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:42.590 ************************************ 00:10:42.590 START TEST nvme_flexible_data_placement 00:10:42.590 ************************************ 00:10:42.590 14:43:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:42.848 Initializing NVMe Controllers 00:10:42.848 Attaching to 0000:00:13.0 00:10:42.848 Controller supports FDP Attached to 0000:00:13.0 00:10:42.848 Namespace ID: 1 Endurance Group ID: 1 00:10:42.848 Initialization complete. 
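Initialization has now attached the FDP-capable controller, and the test body follows. The scan that picked it ended just above: ctrl_has_fdp reads each controller's CTRATT word and tests bit 19, the Flexible Data Placement capability bit, so nvme3 (ctratt=0x88010) passes while the other three (ctratt=0x8000) do not. A standalone sketch of that check; the awk extraction is an assumption about id-ctrl's "ctratt : 0x..." output format:

# sketch: does this controller advertise FDP (CTRATT bit 19)?
ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
if (( ctratt & 1 << 19 )); then
    echo "nvme3 supports FDP (ctratt=$ctratt)"   # 0x88010 & 0x80000 != 0
fi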
00:10:42.848 00:10:42.848 ================================== 00:10:42.848 == FDP tests for Namespace: #01 == 00:10:42.848 ================================== 00:10:42.848 00:10:42.848 Get Feature: FDP: 00:10:42.848 ================= 00:10:42.848 Enabled: Yes 00:10:42.848 FDP configuration Index: 0 00:10:42.848 00:10:42.848 FDP configurations log page 00:10:42.848 =========================== 00:10:42.848 Number of FDP configurations: 1 00:10:42.848 Version: 0 00:10:42.848 Size: 112 00:10:42.848 FDP Configuration Descriptor: 0 00:10:42.848 Descriptor Size: 96 00:10:42.848 Reclaim Group Identifier format: 2 00:10:42.848 FDP Volatile Write Cache: Not Present 00:10:42.848 FDP Configuration: Valid 00:10:42.848 Vendor Specific Size: 0 00:10:42.848 Number of Reclaim Groups: 2 00:10:42.848 Number of Reclaim Unit Handles: 8 00:10:42.848 Max Placement Identifiers: 128 00:10:42.848 Number of Namespaces Supported: 256 00:10:42.848 Reclaim unit Nominal Size: 6000000 bytes 00:10:42.848 Estimated Reclaim Unit Time Limit: Not Reported 00:10:42.848 RUH Desc #000: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #001: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #002: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #003: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #004: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #005: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #006: RUH Type: Initially Isolated 00:10:42.848 RUH Desc #007: RUH Type: Initially Isolated 00:10:42.848 00:10:42.848 FDP reclaim unit handle usage log page 00:10:42.848 ====================================== 00:10:42.848 Number of Reclaim Unit Handles: 8 00:10:42.848 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:42.848 RUH Usage Desc #001: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #002: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #003: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #004: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #005: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #006: RUH Attributes: Unused 00:10:42.848 RUH Usage Desc #007: RUH Attributes: Unused 00:10:42.848 00:10:42.848 FDP statistics log page 00:10:42.848 ======================= 00:10:42.848 Host bytes with metadata written: 1198628864 00:10:42.848 Media bytes with metadata written: 1198772224 00:10:42.848 Media bytes erased: 0 00:10:42.848 00:10:42.848 FDP Reclaim unit handle status 00:10:42.848 ============================== 00:10:42.848 Number of RUHS descriptors: 2 00:10:42.848 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000008e6 00:10:42.848 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:42.848 00:10:42.848 FDP write on placement id: 0 success 00:10:42.848 00:10:42.848 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:42.848 00:10:42.848 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:42.848 00:10:42.848 Get Feature: FDP Events for Placement handle: #0 00:10:42.848 ======================== 00:10:42.848 Number of FDP Events: 6 00:10:42.848 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:42.848 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:42.848 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:42.848 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:42.848 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:42.848 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:42.848 00:10:42.848 FDP events log
page 00:10:42.848 =================== 00:10:42.848 Number of FDP events: 1 00:10:42.848 FDP Event #0: 00:10:42.848 Event Type: RU Not Written to Capacity 00:10:42.848 Placement Identifier: Valid 00:10:42.848 NSID: Valid 00:10:42.848 Location: Valid 00:10:42.848 Placement Identifier: 0 00:10:42.848 Event Timestamp: 6 00:10:42.848 Namespace Identifier: 1 00:10:42.848 Reclaim Group Identifier: 0 00:10:42.848 Reclaim Unit Handle Identifier: 0 00:10:42.848 00:10:42.848 FDP test passed 00:10:42.848 00:10:42.848 real 0m0.232s 00:10:42.848 user 0m0.071s 00:10:42.848 sys 0m0.060s 00:10:42.848 14:43:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.848 14:43:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:42.848 ************************************ 00:10:42.848 END TEST nvme_flexible_data_placement 00:10:42.848 ************************************ 00:10:42.848 00:10:42.848 real 0m7.704s 00:10:42.849 user 0m1.063s 00:10:42.849 sys 0m1.513s 00:10:42.849 14:43:20 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.849 14:43:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:42.849 ************************************ 00:10:42.849 END TEST nvme_fdp 00:10:42.849 ************************************ 00:10:42.849 14:43:20 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:42.849 14:43:20 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:42.849 14:43:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.849 14:43:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.849 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:10:42.849 ************************************ 00:10:42.849 START TEST nvme_rpc 00:10:42.849 ************************************ 00:10:42.849 14:43:20 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:42.849 * Looking for test storage... 
00:10:42.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.849 14:43:20 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.108 14:43:20 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.108 14:43:20 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.108 14:43:21 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.108 --rc genhtml_branch_coverage=1 00:10:43.108 --rc genhtml_function_coverage=1 00:10:43.108 --rc genhtml_legend=1 00:10:43.108 --rc geninfo_all_blocks=1 00:10:43.108 --rc geninfo_unexecuted_blocks=1 00:10:43.108 00:10:43.108 ' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.108 --rc genhtml_branch_coverage=1 00:10:43.108 --rc genhtml_function_coverage=1 00:10:43.108 --rc genhtml_legend=1 00:10:43.108 --rc geninfo_all_blocks=1 00:10:43.108 --rc geninfo_unexecuted_blocks=1 00:10:43.108 00:10:43.108 ' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:43.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.108 --rc genhtml_branch_coverage=1 00:10:43.108 --rc genhtml_function_coverage=1 00:10:43.108 --rc genhtml_legend=1 00:10:43.108 --rc geninfo_all_blocks=1 00:10:43.108 --rc geninfo_unexecuted_blocks=1 00:10:43.108 00:10:43.108 ' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.108 --rc genhtml_branch_coverage=1 00:10:43.108 --rc genhtml_function_coverage=1 00:10:43.108 --rc genhtml_legend=1 00:10:43.108 --rc geninfo_all_blocks=1 00:10:43.108 --rc geninfo_unexecuted_blocks=1 00:10:43.108 00:10:43.108 ' 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67135 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67135 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67135 ']' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.108 14:43:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.108 14:43:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.108 [2024-12-09 14:43:21.190522] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
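While spdk_tgt starts up, note how the bdf above was chosen: get_first_nvme_bdf builds its candidate list by piping scripts/gen_nvme.sh through jq to pull every traddr out of the generated bdev_nvme config, then keeps the first entry, which is how 0000:00:10.0 wins out of the four controllers printed. A sketch of that selection, using the exact jq filter from the trace:

# sketch: collect NVMe PCI addresses from the generated config, keep the first
bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}   # 0000:00:10.0 in this run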
00:10:43.108 [2024-12-09 14:43:21.190633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67135 ] 00:10:43.367 [2024-12-09 14:43:21.348872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.367 [2024-12-09 14:43:21.445311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.367 [2024-12-09 14:43:21.445384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.934 14:43:22 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.934 14:43:22 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:43.934 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:44.192 Nvme0n1 00:10:44.193 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:44.193 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:44.450 request: 00:10:44.450 { 00:10:44.450 "bdev_name": "Nvme0n1", 00:10:44.450 "filename": "non_existing_file", 00:10:44.450 "method": "bdev_nvme_apply_firmware", 00:10:44.450 "req_id": 1 00:10:44.450 } 00:10:44.450 Got JSON-RPC error response 00:10:44.450 response: 00:10:44.450 { 00:10:44.450 "code": -32603, 00:10:44.450 "message": "open file failed." 00:10:44.450 } 00:10:44.450 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:44.451 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:44.451 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:44.709 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:44.709 14:43:22 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67135 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67135 ']' 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67135 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67135 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.709 killing process with pid 67135 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67135' 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67135 00:10:44.709 14:43:22 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67135 00:10:46.149 00:10:46.149 real 0m3.239s 00:10:46.149 user 0m6.121s 00:10:46.149 sys 0m0.503s 00:10:46.149 14:43:24 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.149 14:43:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.149 ************************************ 00:10:46.149 END TEST nvme_rpc 00:10:46.149 ************************************ 00:10:46.149 14:43:24 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.149 14:43:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:46.149 14:43:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.149 14:43:24 -- common/autotest_common.sh@10 -- # set +x 00:10:46.149 ************************************ 00:10:46.149 START TEST nvme_rpc_timeouts 00:10:46.149 ************************************ 00:10:46.149 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.149 * Looking for test storage... 00:10:46.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.149 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.149 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.149 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.407 14:43:24 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.407 --rc genhtml_branch_coverage=1 00:10:46.407 --rc genhtml_function_coverage=1 00:10:46.407 --rc genhtml_legend=1 00:10:46.407 --rc geninfo_all_blocks=1 00:10:46.407 --rc geninfo_unexecuted_blocks=1 00:10:46.407 00:10:46.407 ' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.407 --rc genhtml_branch_coverage=1 00:10:46.407 --rc genhtml_function_coverage=1 00:10:46.407 --rc genhtml_legend=1 00:10:46.407 --rc geninfo_all_blocks=1 00:10:46.407 --rc geninfo_unexecuted_blocks=1 00:10:46.407 00:10:46.407 ' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.407 --rc genhtml_branch_coverage=1 00:10:46.407 --rc genhtml_function_coverage=1 00:10:46.407 --rc genhtml_legend=1 00:10:46.407 --rc geninfo_all_blocks=1 00:10:46.407 --rc geninfo_unexecuted_blocks=1 00:10:46.407 00:10:46.407 ' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.407 --rc genhtml_branch_coverage=1 00:10:46.407 --rc genhtml_function_coverage=1 00:10:46.407 --rc genhtml_legend=1 00:10:46.407 --rc geninfo_all_blocks=1 00:10:46.407 --rc geninfo_unexecuted_blocks=1 00:10:46.407 00:10:46.407 ' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67199 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67199 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67235 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
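With the temp files and cleanup trap in place, the harness launches a fresh spdk_tgt and blocks until its RPC socket answers before touching any settings. A minimal sketch of that handshake, using the launch command visible in the trace below (the rpc_get_methods poll is an assumed stand-in for what the waitforlisten helper does internally):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &   # start the target on cores 0-1
spdk_tgt_pid=$!
# poll /var/tmp/spdk.sock until it accepts RPCs (roughly what waitforlisten does)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done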
00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:46.407 14:43:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67235 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67235 ']' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.407 14:43:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:46.407 [2024-12-09 14:43:24.401693] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:10:46.407 [2024-12-09 14:43:24.401821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67235 ] 00:10:46.665 [2024-12-09 14:43:24.560856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:46.665 [2024-12-09 14:43:24.660928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.665 [2024-12-09 14:43:24.661011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.229 Checking default timeout settings: 00:10:47.229 14:43:25 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.229 14:43:25 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:47.229 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:47.229 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:47.486 Making settings changes with rpc: 00:10:47.486 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:47.486 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:47.744 Check default vs. modified settings: 00:10:47.744 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:47.744 14:43:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:48.002 Setting action_on_timeout is changed as expected. 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 Setting timeout_us is changed as expected. 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.002 Setting timeout_admin_us is changed as expected. 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67199 /tmp/settings_modified_67199 00:10:48.002 14:43:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67235 00:10:48.002 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67235 ']' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67235 00:10:48.002 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:48.002 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.002 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67235 00:10:48.259 killing process with pid 67235 00:10:48.260 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.260 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.260 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67235' 00:10:48.260 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67235 00:10:48.260 14:43:26 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67235 00:10:49.633 RPC TIMEOUT SETTING TEST PASSED. 00:10:49.633 14:43:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
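Stripped of the xtrace noise, the whole timeout check above reduces to a save/modify/save/compare cycle against the running target. A condensed sketch using only commands that appear in the trace (the final grep stands in for the per-setting awk/sed comparison):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_67199     # defaults: action_on_timeout=none, both timeouts 0
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_67199    # same dump after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    grep "$setting" /tmp/settings_default_67199 /tmp/settings_modified_67199   # values must differ as expected
done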
00:10:49.633 ************************************ 00:10:49.633 END TEST nvme_rpc_timeouts 00:10:49.633 ************************************ 00:10:49.633 00:10:49.633 real 0m3.269s 00:10:49.633 user 0m6.361s 00:10:49.633 sys 0m0.478s 00:10:49.633 14:43:27 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.633 14:43:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:49.633 14:43:27 -- spdk/autotest.sh@239 -- # uname -s 00:10:49.633 14:43:27 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:49.633 14:43:27 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:49.633 14:43:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.633 14:43:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.633 14:43:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.633 ************************************ 00:10:49.633 START TEST sw_hotplug 00:10:49.633 ************************************ 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:49.633 * Looking for test storage... 00:10:49.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.633 14:43:27 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.633 --rc genhtml_branch_coverage=1 00:10:49.633 --rc genhtml_function_coverage=1 00:10:49.633 --rc genhtml_legend=1 00:10:49.633 --rc geninfo_all_blocks=1 00:10:49.633 --rc geninfo_unexecuted_blocks=1 00:10:49.633 00:10:49.633 ' 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.633 --rc genhtml_branch_coverage=1 00:10:49.633 --rc genhtml_function_coverage=1 00:10:49.633 --rc genhtml_legend=1 00:10:49.633 --rc geninfo_all_blocks=1 00:10:49.633 --rc geninfo_unexecuted_blocks=1 00:10:49.633 00:10:49.633 ' 00:10:49.633 14:43:27 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.633 --rc genhtml_branch_coverage=1 00:10:49.633 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 14:43:27 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.634 --rc genhtml_branch_coverage=1 00:10:49.634 --rc genhtml_function_coverage=1 00:10:49.634 --rc genhtml_legend=1 00:10:49.634 --rc geninfo_all_blocks=1 00:10:49.634 --rc geninfo_unexecuted_blocks=1 00:10:49.634 00:10:49.634 ' 00:10:49.634 14:43:27 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.159 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.159 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.159 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.159 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.159 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:50.159 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:50.159 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
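nvme_in_userspace, traced below, enumerates NVMe controllers by PCI class code (class 01, subclass 08, progif 02) and keeps only devices usable from userspace. Its core is the lspci pipeline that appears verbatim in the trace; run standalone, it prints the BDFs the test is about to hotplug:

# BDFs of all NVMe-class PCI functions (class/subclass/progif = 01/08/02)
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'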
00:10:50.159 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.159 14:43:28 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.159 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:50.160 14:43:28 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:50.160 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:50.160 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:50.160 14:43:28 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:50.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.677 Waiting for block devices as requested 00:10:50.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.677 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.677 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.936 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.204 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:56.204 14:43:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:56.204 14:43:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:56.204 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:56.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.204 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:56.462 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:56.721 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.721 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:56.721 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:56.721 14:43:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68092 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:56.980 14:43:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:56.980 14:43:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:56.980 14:43:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:56.980 14:43:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:56.980 14:43:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:56.980 14:43:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:57.238 Initializing NVMe Controllers 00:10:57.238 Attaching to 0000:00:10.0 00:10:57.238 Attaching to 0000:00:11.0 00:10:57.238 Attached to 0000:00:10.0 00:10:57.238 Attached to 0000:00:11.0 00:10:57.238 Initialization complete. Starting I/O... 
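Each of the three iterations that follow surprise-removes both allowed controllers while the hotplug example keeps I/O running, then puts them back. The echo lines in the trace (sw_hotplug.sh@40 and @58-62) map onto standard Linux sysfs operations; a sketch of one removal/re-add cycle for a single device — the per-device paths are assumed from the usual PCI sysfs interface, only the /sys/bus/pci/rescan write is spelled out verbatim later in the log:

echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove    # surprise-remove while I/O is in flight
sleep 6                                              # hotplug_wait: let the app abort outstanding I/O
echo 1 > /sys/bus/pci/rescan                         # rediscover the function
echo uio_pci_generic > /sys/bus/pci/devices/0000:00:10.0/driver_override
echo 0000:00:10.0 > /sys/bus/pci/drivers_probe       # re-bind so userspace I/O can resume
echo '' > /sys/bus/pci/devices/0000:00:10.0/driver_override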
00:10:57.238 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:57.238 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:57.238 00:10:58.172 QEMU NVMe Ctrl (12340 ): 2752 I/Os completed (+2752) 00:10:58.172 QEMU NVMe Ctrl (12341 ): 2708 I/Os completed (+2708) 00:10:58.172 00:10:59.108 QEMU NVMe Ctrl (12340 ): 6385 I/Os completed (+3633) 00:10:59.108 QEMU NVMe Ctrl (12341 ): 6299 I/Os completed (+3591) 00:10:59.108 00:11:00.074 QEMU NVMe Ctrl (12340 ): 9349 I/Os completed (+2964) 00:11:00.074 QEMU NVMe Ctrl (12341 ): 9213 I/Os completed (+2914) 00:11:00.074 00:11:01.009 QEMU NVMe Ctrl (12340 ): 12904 I/Os completed (+3555) 00:11:01.009 QEMU NVMe Ctrl (12341 ): 12737 I/Os completed (+3524) 00:11:01.009 00:11:02.382 QEMU NVMe Ctrl (12340 ): 16099 I/Os completed (+3195) 00:11:02.382 QEMU NVMe Ctrl (12341 ): 15905 I/Os completed (+3168) 00:11:02.382 00:11:02.949 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.949 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.949 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.949 [2024-12-09 14:43:40.926249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:02.949 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:02.949 [2024-12-09 14:43:40.927252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.927298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.927314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.927329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:02.949 [2024-12-09 14:43:40.929014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.929060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.929073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.929087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.949 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.949 [2024-12-09 14:43:40.949583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:02.949 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:02.949 [2024-12-09 14:43:40.950450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.950488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.950506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.950519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:02.949 [2024-12-09 14:43:40.951896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.951927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.951941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.949 [2024-12-09 14:43:40.951952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.950 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:02.950 14:43:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.950 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.950 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.950 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.208 Attaching to 0000:00:10.0 00:11:03.208 Attached to 0000:00:10.0 00:11:03.208 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:03.208 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.208 14:43:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.208 Attaching to 0000:00:11.0 00:11:03.208 Attached to 0000:00:11.0 00:11:04.146 QEMU NVMe Ctrl (12340 ): 3545 I/Os completed (+3545) 00:11:04.146 QEMU NVMe Ctrl (12341 ): 3224 I/Os completed (+3224) 00:11:04.146 00:11:05.081 QEMU NVMe Ctrl (12340 ): 6882 I/Os completed (+3337) 00:11:05.081 QEMU NVMe Ctrl (12341 ): 6485 I/Os completed (+3261) 00:11:05.081 00:11:06.016 QEMU NVMe Ctrl (12340 ): 10934 I/Os completed (+4052) 00:11:06.016 QEMU NVMe Ctrl (12341 ): 11154 I/Os completed (+4669) 00:11:06.016 00:11:07.393 QEMU NVMe Ctrl (12340 ): 14532 I/Os completed (+3598) 00:11:07.393 QEMU NVMe Ctrl (12341 ): 14764 I/Os completed (+3610) 00:11:07.393 00:11:08.335 QEMU NVMe Ctrl (12340 ): 18040 I/Os completed (+3508) 00:11:08.335 QEMU NVMe Ctrl (12341 ): 18198 I/Os completed (+3434) 00:11:08.335 00:11:09.270 QEMU NVMe Ctrl (12340 ): 21576 I/Os completed (+3536) 00:11:09.270 QEMU NVMe Ctrl (12341 ): 21760 I/Os completed (+3562) 00:11:09.270 00:11:10.210 QEMU NVMe Ctrl (12340 ): 25348 I/Os completed (+3772) 00:11:10.210 QEMU NVMe Ctrl (12341 ): 25362 I/Os completed (+3602) 00:11:10.210 00:11:11.145 QEMU NVMe Ctrl (12340 ): 28933 I/Os completed (+3585) 00:11:11.145 
QEMU NVMe Ctrl (12341 ): 28971 I/Os completed (+3609) 00:11:11.145 00:11:12.081 QEMU NVMe Ctrl (12340 ): 32461 I/Os completed (+3528) 00:11:12.082 QEMU NVMe Ctrl (12341 ): 32546 I/Os completed (+3575) 00:11:12.082 00:11:13.019 QEMU NVMe Ctrl (12340 ): 36076 I/Os completed (+3615) 00:11:13.019 QEMU NVMe Ctrl (12341 ): 36072 I/Os completed (+3526) 00:11:13.019 00:11:14.396 QEMU NVMe Ctrl (12340 ): 40051 I/Os completed (+3975) 00:11:14.396 QEMU NVMe Ctrl (12341 ): 40010 I/Os completed (+3938) 00:11:14.396 00:11:15.332 QEMU NVMe Ctrl (12340 ): 43642 I/Os completed (+3591) 00:11:15.332 QEMU NVMe Ctrl (12341 ): 43522 I/Os completed (+3512) 00:11:15.332 00:11:15.332 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:15.332 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.332 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.332 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.332 [2024-12-09 14:43:53.193637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:15.332 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:15.332 [2024-12-09 14:43:53.194596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.332 [2024-12-09 14:43:53.194639] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.332 [2024-12-09 14:43:53.194654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.332 [2024-12-09 14:43:53.194671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.332 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:15.333 [2024-12-09 14:43:53.196292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.196332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.196345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.196358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 EAL: Cannot open sysfs resource 00:11:15.333 EAL: pci_scan_one(): cannot parse resource 00:11:15.333 EAL: Scan for (pci) bus failed. 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.333 [2024-12-09 14:43:53.213398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:15.333 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:15.333 [2024-12-09 14:43:53.214286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.214319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.214337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.214351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:15.333 [2024-12-09 14:43:53.215732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.215764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.215776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 [2024-12-09 14:43:53.215790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:15.333 Attaching to 0000:00:10.0 00:11:15.333 Attached to 0000:00:10.0 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.333 14:43:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.333 Attaching to 0000:00:11.0 00:11:15.333 Attached to 0000:00:11.0 00:11:16.312 QEMU NVMe Ctrl (12340 ): 2740 I/Os completed (+2740) 00:11:16.312 QEMU NVMe Ctrl (12341 ): 2497 I/Os completed (+2497) 00:11:16.312 00:11:17.246 QEMU NVMe Ctrl (12340 ): 6431 I/Os completed (+3691) 00:11:17.246 QEMU NVMe Ctrl (12341 ): 6102 I/Os completed (+3605) 00:11:17.246 00:11:18.193 QEMU NVMe Ctrl (12340 ): 10158 I/Os completed (+3727) 00:11:18.193 QEMU NVMe Ctrl (12341 ): 9812 I/Os completed (+3710) 00:11:18.193 00:11:19.138 QEMU NVMe Ctrl (12340 ): 14042 I/Os completed (+3884) 00:11:19.138 QEMU NVMe Ctrl (12341 ): 13699 I/Os completed (+3887) 00:11:19.138 00:11:20.072 QEMU NVMe Ctrl (12340 ): 17704 I/Os completed (+3662) 00:11:20.072 QEMU NVMe Ctrl (12341 ): 17348 I/Os completed (+3649) 00:11:20.072 00:11:21.006 QEMU NVMe Ctrl (12340 ): 21824 I/Os completed (+4120) 00:11:21.006 QEMU NVMe Ctrl (12341 ): 21812 I/Os completed (+4464) 00:11:21.006 00:11:22.380 QEMU NVMe Ctrl (12340 ): 25022 I/Os completed (+3198) 00:11:22.380 QEMU NVMe Ctrl (12341 ): 24966 I/Os completed (+3154) 00:11:22.380 00:11:23.314 QEMU NVMe Ctrl (12340 ): 28423 I/Os completed (+3401) 00:11:23.314 QEMU NVMe Ctrl (12341 ): 28256 I/Os completed (+3290) 00:11:23.314 
00:11:24.246 QEMU NVMe Ctrl (12340 ): 32382 I/Os completed (+3959) 00:11:24.246 QEMU NVMe Ctrl (12341 ): 31843 I/Os completed (+3587) 00:11:24.246 00:11:25.179 QEMU NVMe Ctrl (12340 ): 36264 I/Os completed (+3882) 00:11:25.179 QEMU NVMe Ctrl (12341 ): 35425 I/Os completed (+3582) 00:11:25.179 00:11:26.110 QEMU NVMe Ctrl (12340 ): 40025 I/Os completed (+3761) 00:11:26.110 QEMU NVMe Ctrl (12341 ): 38975 I/Os completed (+3550) 00:11:26.110 00:11:27.043 QEMU NVMe Ctrl (12340 ): 43624 I/Os completed (+3599) 00:11:27.043 QEMU NVMe Ctrl (12341 ): 42515 I/Os completed (+3540) 00:11:27.043 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.609 [2024-12-09 14:44:05.451673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:27.609 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:27.609 [2024-12-09 14:44:05.452653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.452697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.452712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.452728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:27.609 [2024-12-09 14:44:05.454594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.454642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.454659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.454675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.609 [2024-12-09 14:44:05.476095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:27.609 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:27.609 [2024-12-09 14:44:05.476960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.476995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.477013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.477027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:27.609 [2024-12-09 14:44:05.478402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.478433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.478448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 [2024-12-09 14:44:05.478458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.609 EAL: Cannot open sysfs resource 00:11:27.609 EAL: pci_scan_one(): cannot parse resource 00:11:27.609 EAL: Scan for (pci) bus failed. 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:27.609 Attaching to 0000:00:10.0 00:11:27.609 Attached to 0000:00:10.0 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.609 14:44:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.609 Attaching to 0000:00:11.0 00:11:27.609 Attached to 0000:00:11.0 00:11:27.609 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:27.609 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:27.866 [2024-12-09 14:44:05.731382] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:40.105 14:44:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:40.105 14:44:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.105 14:44:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:11:40.105 14:44:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:11:40.105 14:44:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:40.105 14:44:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:11:40.105 14:44:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:11:40.105 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 14:44:17 sw_hotplug -- nvme/sw_hotplug.sh@91 
-- # sleep 6 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68092 00:11:46.684 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68092) - No such process 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68092 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68632 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:46.684 14:44:23 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68632 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68632 ']' 00:11:46.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.684 14:44:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 [2024-12-09 14:44:23.822658] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
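From here the hotplug cycles are driven through a long-lived spdk_tgt rather than the standalone example: once the target starting up below is listening, the test enables SPDK's hotplug monitor and then tracks attach/detach purely by polling the bdev list. The two RPCs involved, exactly as they appear in the trace (bdev_bdfs is the small helper defined in sw_hotplug.sh):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e    # let the target watch for NVMe add/remove
# bdev_bdfs: PCI addresses currently backing nvme bdevs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u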
00:11:46.684 [2024-12-09 14:44:23.823347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68632 ] 00:11:46.684 [2024-12-09 14:44:23.985448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.684 [2024-12-09 14:44:24.096423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.684 14:44:24 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.684 14:44:24 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:46.684 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:46.684 14:44:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.684 14:44:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.684 14:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.684 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:46.684 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:46.684 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:46.945 14:44:24 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:46.945 14:44:24 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:46.945 14:44:24 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:46.945 14:44:24 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:46.945 14:44:24 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:46.945 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:46.945 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:46.945 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:46.945 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:46.945 14:44:24 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.526 14:44:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.526 14:44:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.526 14:44:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:53.526 14:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:53.526 [2024-12-09 14:44:30.897371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:53.526 [2024-12-09 14:44:30.898703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:30.898743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:30.898758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:30.898779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:30.898787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:30.898796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:30.898813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:30.898822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:30.898829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:30.898842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:30.898849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:30.898857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.526 14:44:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.526 14:44:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.526 [2024-12-09 14:44:31.397359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:53.526 [2024-12-09 14:44:31.398576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:31.398609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:31.398621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:31.398635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:31.398644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:31.398651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:31.398660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:31.398667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:31.398675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:31.398682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.526 [2024-12-09 14:44:31.398690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.526 [2024-12-09 14:44:31.398697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.526 [2024-12-09 14:44:31.398710] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:53.526 [2024-12-09 14:44:31.398718] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:53.526 [2024-12-09 14:44:31.398725] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:53.526 [2024-12-09 14:44:31.398730] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:11:53.526 14:44:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.526 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:53.787 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.787 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.787 14:44:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 
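Between removal and re-add the helper simply polls that list: every half second it re-runs bdev_bdfs and prints "Still waiting for <bdf> to be gone" until the target reports no nvme-backed bdevs, then re-binds the devices and waits for them to reappear. A sketch of that wait loop, reconstructed from the sw_hotplug.sh@50/@51 trace lines:

while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done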
00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.061 14:44:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:06.061 14:44:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.061 [2024-12-09 14:44:43.797577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:06.061 [2024-12-09 14:44:43.798926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-12-09 14:44:43.798963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-12-09 14:44:43.798976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-12-09 14:44:43.798997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-12-09 14:44:43.799005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-12-09 14:44:43.799014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-12-09 14:44:43.799022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-12-09 14:44:43.799031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-12-09 14:44:43.799037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.061 [2024-12-09 14:44:43.799046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.061 [2024-12-09 14:44:43.799053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.061 [2024-12-09 14:44:43.799064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.322 14:44:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.322 14:44:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.322 [2024-12-09 14:44:44.297584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:06.322 [2024-12-09 14:44:44.298913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.322 [2024-12-09 14:44:44.298945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.322 [2024-12-09 14:44:44.298958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.322 [2024-12-09 14:44:44.298974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.322 [2024-12-09 14:44:44.298982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.322 [2024-12-09 14:44:44.298990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.322 [2024-12-09 14:44:44.299000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.322 [2024-12-09 14:44:44.299006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.322 [2024-12-09 14:44:44.299015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.322 [2024-12-09 14:44:44.299023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.322 [2024-12-09 14:44:44.299031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.322 [2024-12-09 14:44:44.299038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.322 14:44:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:06.322 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.893 14:44:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.893 14:44:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.893 14:44:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.893 14:44:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.154 14:44:45 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.154 14:44:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.458 [2024-12-09 14:44:57.198396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:19.458 [2024-12-09 14:44:57.199673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.458 [2024-12-09 14:44:57.199707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.458 [2024-12-09 14:44:57.199719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.458 [2024-12-09 14:44:57.199741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.458 [2024-12-09 14:44:57.199748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.458 [2024-12-09 14:44:57.199757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.458 [2024-12-09 14:44:57.199765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.458 [2024-12-09 14:44:57.199774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.458 [2024-12-09 14:44:57.199780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.458 [2024-12-09 14:44:57.199790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.458 [2024-12-09 14:44:57.199796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.458 [2024-12-09 14:44:57.199819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.458 14:44:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:19.458 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.720 [2024-12-09 14:44:57.698536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:19.720 [2024-12-09 14:44:57.699769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.720 [2024-12-09 14:44:57.699814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.720 [2024-12-09 14:44:57.699827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.720 [2024-12-09 14:44:57.699842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.720 [2024-12-09 14:44:57.699852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.720 [2024-12-09 14:44:57.699860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.720 [2024-12-09 14:44:57.699869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.720 [2024-12-09 14:44:57.699876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.720 [2024-12-09 14:44:57.699884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.720 [2024-12-09 14:44:57.699891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.720 [2024-12-09 14:44:57.699899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.720 [2024-12-09 14:44:57.699906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.720 14:44:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.720 14:44:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.720 14:44:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:19.720 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.982 14:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:19.982 14:44:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:19.982 14:44:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.982 14:44:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.25 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.25 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.25 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.25 2 00:12:32.216 remove_attach_helper took 45.25s to complete (handling 2 nvme drive(s)) 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:32.216 14:45:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:32.216 14:45:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:32.216 14:45:10 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:38.797 [2024-12-09 14:45:16.182869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:38.797 [2024-12-09 14:45:16.183844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.183875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.183887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.183908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.183916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.183925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.183932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.183943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.183949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.183959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.183966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.183974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:38.797 14:45:16 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.797 [2024-12-09 14:45:16.682861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:38.797 [2024-12-09 14:45:16.683787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.683830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.683842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.683857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.683866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.683873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.683882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.683889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.683898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 [2024-12-09 14:45:16.683906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.797 [2024-12-09 14:45:16.683916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.797 [2024-12-09 14:45:16.683923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.797 14:45:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:38.797 14:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.369 14:45:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.369 14:45:17 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.369 14:45:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.369 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:39.630 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:39.630 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.630 14:45:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.862 [2024-12-09 14:45:29.583062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:51.862 [2024-12-09 14:45:29.584165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.862 [2024-12-09 14:45:29.584269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.862 [2024-12-09 14:45:29.584326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.862 [2024-12-09 14:45:29.584383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.862 [2024-12-09 14:45:29.584402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.862 [2024-12-09 14:45:29.584428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.862 [2024-12-09 14:45:29.584481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.862 [2024-12-09 14:45:29.584501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.862 [2024-12-09 14:45:29.584541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.862 [2024-12-09 14:45:29.584566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.862 [2024-12-09 14:45:29.584583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.862 [2024-12-09 14:45:29.584647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.862 14:45:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:51.862 14:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:51.862 [2024-12-09 14:45:29.983058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:51.862 [2024-12-09 14:45:29.984083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.119 [2024-12-09 14:45:29.984177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.120 [2024-12-09 14:45:29.984241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.120 [2024-12-09 14:45:29.984408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.120 [2024-12-09 14:45:29.984430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.120 [2024-12-09 14:45:29.984453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.120 [2024-12-09 14:45:29.984479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.120 [2024-12-09 14:45:29.984494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.120 [2024-12-09 14:45:29.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.120 [2024-12-09 14:45:29.984577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.120 [2024-12-09 14:45:29.984595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.120 [2024-12-09 14:45:29.984617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.120 14:45:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.120 14:45:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:45:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.120 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:52.379 14:45:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.632 [2024-12-09 14:45:42.483256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:04.632 [2024-12-09 14:45:42.484278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.632 [2024-12-09 14:45:42.484375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.632 [2024-12-09 14:45:42.484435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.632 [2024-12-09 14:45:42.484493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.632 [2024-12-09 14:45:42.484511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.632 [2024-12-09 14:45:42.484537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.632 [2024-12-09 14:45:42.484561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.632 [2024-12-09 14:45:42.484578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.632 [2024-12-09 14:45:42.484602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.632 [2024-12-09 14:45:42.484697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.632 [2024-12-09 14:45:42.484715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.632 [2024-12-09 14:45:42.484777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.632 14:45:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:04.632 14:45:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:04.891 [2024-12-09 14:45:42.883248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:04.891 [2024-12-09 14:45:42.884251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.891 [2024-12-09 14:45:42.884348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.891 [2024-12-09 14:45:42.884408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.891 [2024-12-09 14:45:42.884467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.891 [2024-12-09 14:45:42.884488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.891 [2024-12-09 14:45:42.884538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.891 [2024-12-09 14:45:42.884567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.891 [2024-12-09 14:45:42.884675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.891 [2024-12-09 14:45:42.884729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.891 [2024-12-09 14:45:42.884753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.891 [2024-12-09 14:45:42.884771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.891 [2024-12-09 14:45:42.884793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.891 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.891 14:45:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.891 14:45:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.151 14:45:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.151 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:05.410 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:05.410 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:05.410 14:45:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.23 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.23 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:13:17.636 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:17.636 14:45:55 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68632 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68632 ']' 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68632 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68632 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68632' 00:13:17.636 killing process with pid 68632 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68632 00:13:17.636 14:45:55 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68632 00:13:18.580 14:45:56 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:18.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:19.418 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:19.418 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:19.716 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.716 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.716 ************************************ 00:13:19.716 END TEST sw_hotplug 00:13:19.716 ************************************ 
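Note on the detach/reattach mechanics visible throughout the sw_hotplug trace above: set -x does not print redirection targets, so the destinations of the echoed values are not in the log. The sysfs paths below are therefore assumptions inferred from the echoed values, not taken from the source — the per-device "echo 1" at sw_hotplug.sh@39-40 before each wait is consistent with a write to each device's sysfs remove node, and the @56-62 sequence ("1", "uio_pci_generic", the BDF twice, then an empty string) matches the conventional driver_override rebind:

    # Hedged reconstruction of one remove/reattach cycle; paths are assumptions.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # likely target of @40's "echo 1"
    done

    echo 1 > /sys/bus/pci/rescan                       # likely target of @56's single "echo 1"

    for bdf in 0000:00:10.0 0000:00:11.0; do
        # Pin the device to uio_pci_generic, detach whatever driver holds it,
        # let the kernel re-probe, then clear the override (the trailing "echo ''").
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done

Each cycle then sleeps 12 s (@66) before re-checking that both BDFs are back, which is why the three hotplug events account for the ~45 s helper_time reported at the end of the test.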
00:13:19.716 00:13:19.716 real 2m30.147s 00:13:19.716 user 1m52.829s 00:13:19.716 sys 0m15.968s 00:13:19.716 14:45:57 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.716 14:45:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.716 14:45:57 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:19.716 14:45:57 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:19.716 14:45:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:19.716 14:45:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.716 14:45:57 -- common/autotest_common.sh@10 -- # set +x 00:13:19.716 ************************************ 00:13:19.716 START TEST nvme_xnvme 00:13:19.716 ************************************ 00:13:19.716 14:45:57 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:19.716 * Looking for test storage... 00:13:19.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.716 14:45:57 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.716 14:45:57 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.716 14:45:57 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.986 14:45:57 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.986 --rc genhtml_branch_coverage=1 00:13:19.986 --rc genhtml_function_coverage=1 00:13:19.986 --rc genhtml_legend=1 00:13:19.986 --rc geninfo_all_blocks=1 00:13:19.986 --rc geninfo_unexecuted_blocks=1 00:13:19.986 00:13:19.986 ' 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.986 --rc genhtml_branch_coverage=1 00:13:19.986 --rc genhtml_function_coverage=1 00:13:19.986 --rc genhtml_legend=1 00:13:19.986 --rc geninfo_all_blocks=1 00:13:19.986 --rc geninfo_unexecuted_blocks=1 00:13:19.986 00:13:19.986 ' 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.986 --rc genhtml_branch_coverage=1 00:13:19.986 --rc genhtml_function_coverage=1 00:13:19.986 --rc genhtml_legend=1 00:13:19.986 --rc geninfo_all_blocks=1 00:13:19.986 --rc geninfo_unexecuted_blocks=1 00:13:19.986 00:13:19.986 ' 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.986 --rc genhtml_branch_coverage=1 00:13:19.986 --rc genhtml_function_coverage=1 00:13:19.986 --rc genhtml_legend=1 00:13:19.986 --rc geninfo_all_blocks=1 00:13:19.986 --rc geninfo_unexecuted_blocks=1 00:13:19.986 00:13:19.986 ' 00:13:19.986 14:45:57 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:19.986 14:45:57 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:19.986 14:45:57 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:19.986 14:45:57 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:19.986 14:45:57 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:19.987 14:45:57 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:19.987 14:45:57 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:19.987 14:45:57 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:19.987 14:45:57 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:19.987 #define SPDK_CONFIG_H 00:13:19.987 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:19.987 #define SPDK_CONFIG_APPS 1 00:13:19.987 #define SPDK_CONFIG_ARCH native 00:13:19.987 #define SPDK_CONFIG_ASAN 1 00:13:19.987 #undef SPDK_CONFIG_AVAHI 00:13:19.987 #undef SPDK_CONFIG_CET 00:13:19.987 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:19.987 #define SPDK_CONFIG_COVERAGE 1 00:13:19.987 #define SPDK_CONFIG_CROSS_PREFIX 00:13:19.987 #undef SPDK_CONFIG_CRYPTO 00:13:19.987 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:19.987 #undef SPDK_CONFIG_CUSTOMOCF 00:13:19.987 #undef SPDK_CONFIG_DAOS 00:13:19.987 #define SPDK_CONFIG_DAOS_DIR 00:13:19.987 #define SPDK_CONFIG_DEBUG 1 00:13:19.987 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:19.987 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:19.987 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:19.987 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:19.987 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:19.987 #undef SPDK_CONFIG_DPDK_UADK 00:13:19.987 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:19.987 #define SPDK_CONFIG_EXAMPLES 1 00:13:19.987 #undef SPDK_CONFIG_FC 00:13:19.987 #define SPDK_CONFIG_FC_PATH 00:13:19.987 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:19.987 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:19.987 #define SPDK_CONFIG_FSDEV 1 00:13:19.987 #undef SPDK_CONFIG_FUSE 00:13:19.987 #undef SPDK_CONFIG_FUZZER 00:13:19.987 #define SPDK_CONFIG_FUZZER_LIB 00:13:19.987 #undef SPDK_CONFIG_GOLANG 00:13:19.987 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:19.987 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:19.987 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:19.987 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:19.987 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:19.987 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:19.987 #undef SPDK_CONFIG_HAVE_LZ4 00:13:19.987 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:19.987 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:19.987 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:19.987 #define SPDK_CONFIG_IDXD 1 00:13:19.988 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:19.988 #undef SPDK_CONFIG_IPSEC_MB 00:13:19.988 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:19.988 #define SPDK_CONFIG_ISAL 1 00:13:19.988 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:19.988 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:19.988 #define SPDK_CONFIG_LIBDIR 00:13:19.988 #undef SPDK_CONFIG_LTO 00:13:19.988 #define SPDK_CONFIG_MAX_LCORES 128 00:13:19.988 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:19.988 #define SPDK_CONFIG_NVME_CUSE 1 00:13:19.988 #undef SPDK_CONFIG_OCF 00:13:19.988 #define SPDK_CONFIG_OCF_PATH 00:13:19.988 #define SPDK_CONFIG_OPENSSL_PATH 00:13:19.988 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:19.988 #define SPDK_CONFIG_PGO_DIR 00:13:19.988 #undef SPDK_CONFIG_PGO_USE 00:13:19.988 #define SPDK_CONFIG_PREFIX /usr/local 00:13:19.988 #undef SPDK_CONFIG_RAID5F 00:13:19.988 #undef SPDK_CONFIG_RBD 00:13:19.988 #define SPDK_CONFIG_RDMA 1 00:13:19.988 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:19.988 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:19.988 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:19.988 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:19.988 #define SPDK_CONFIG_SHARED 1 00:13:19.988 #undef SPDK_CONFIG_SMA 00:13:19.988 #define SPDK_CONFIG_TESTS 1 00:13:19.988 #undef SPDK_CONFIG_TSAN 00:13:19.988 #define SPDK_CONFIG_UBLK 1 00:13:19.988 #define SPDK_CONFIG_UBSAN 1 00:13:19.988 #undef SPDK_CONFIG_UNIT_TESTS 00:13:19.988 #undef SPDK_CONFIG_URING 00:13:19.988 #define SPDK_CONFIG_URING_PATH 00:13:19.988 #undef SPDK_CONFIG_URING_ZNS 00:13:19.988 #undef SPDK_CONFIG_USDT 00:13:19.988 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:19.988 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:19.988 #undef SPDK_CONFIG_VFIO_USER 00:13:19.988 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:19.988 #define SPDK_CONFIG_VHOST 1 00:13:19.988 #define SPDK_CONFIG_VIRTIO 1 00:13:19.988 #undef SPDK_CONFIG_VTUNE 00:13:19.988 #define SPDK_CONFIG_VTUNE_DIR 00:13:19.988 #define SPDK_CONFIG_WERROR 1 00:13:19.988 #define SPDK_CONFIG_WPDK_DIR 00:13:19.988 #define SPDK_CONFIG_XNVME 1 00:13:19.988 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:19.988 14:45:57 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:19.988 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.988 14:45:57 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.988 14:45:57 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.988 14:45:57 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.988 14:45:57 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.988 14:45:57 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.988 14:45:57 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.988 14:45:57 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.988 14:45:57 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:19.988 14:45:57 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:19.988 
14:45:57 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:19.988 14:45:57 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:19.988 14:45:57 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:19.989 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:19.989 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
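[Editor's note: the exports traced above pin the sanitizer runtime behavior for the whole test run. The extract below copies the values verbatim from the trace; the inline comments are editorial interpretation of standard ASan/UBSan/LSan option semantics, not SPDK documentation.]

```bash
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
# abort_on_error=1   -> abort on the first report so CI fails fast
# disable_coredump=0 -> still allow core dumps for post-mortem debugging

export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
# exitcode=134 mirrors the conventional SIGABRT exit status (128 + 6)

# LeakSanitizer: suppress known leaks from libfuse3, exactly as the harness
# does via /var/tmp/asan_suppression_file in the trace above.
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
```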
00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:19.989 14:45:57 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69995 ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69995 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.cCAhKU 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.cCAhKU/tests/xnvme /tmp/spdk.cCAhKU 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:19.990 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975179264 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593165824 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975179264 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593165824 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98067775488 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1635004416 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:19.990 * Looking for test storage... 
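[Editor's note: the df -T loop traced above fills the mounts/fss/sizes/avails tables that the "* Looking for test storage..." step consults next. The sketch below condenses that candidate walk; pick_test_storage is a hypothetical name, and the tmpfs/ramfs and root-mount special cases visible in the trace (the [[ btrfs == tmpfs ]] checks) are omitted.]

```bash
# Condensed sketch: take the first candidate directory whose backing
# filesystem has at least the requested free space, as set_test_storage does.
pick_test_storage() {
    local requested_size=$1; shift
    local dir mount avail
    for dir in "$@"; do
        # Mount point backing this directory (header row filtered out),
        # matching the awk invocation in the trace above.
        mount=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
        # Free bytes on that mount.
        avail=$(df --output=avail -B1 "$mount" | tail -n1)
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            return 0
        fi
    done
    return 1
}

# This run asks for 2 GiB of scratch plus overhead (requested_size=2214592512).
pick_test_storage 2214592512 /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp
```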
00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975179264 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.990 14:45:57 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.990 14:45:58 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.990 14:45:58 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.990 14:45:58 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.990 14:45:58 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.990 14:45:58 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.990 14:45:58 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:19.991 14:45:58 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.991 14:45:58 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.991 --rc genhtml_branch_coverage=1 00:13:19.991 --rc genhtml_function_coverage=1 00:13:19.991 --rc genhtml_legend=1 00:13:19.991 --rc geninfo_all_blocks=1 00:13:19.991 --rc geninfo_unexecuted_blocks=1 00:13:19.991 00:13:19.991 ' 00:13:19.991 14:45:58 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.991 --rc genhtml_branch_coverage=1 00:13:19.991 --rc genhtml_function_coverage=1 00:13:19.991 --rc genhtml_legend=1 00:13:19.991 --rc geninfo_all_blocks=1 
00:13:19.991 --rc geninfo_unexecuted_blocks=1 00:13:19.991 00:13:19.991 ' 00:13:19.991 14:45:58 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.991 --rc genhtml_branch_coverage=1 00:13:19.991 --rc genhtml_function_coverage=1 00:13:19.991 --rc genhtml_legend=1 00:13:19.991 --rc geninfo_all_blocks=1 00:13:19.991 --rc geninfo_unexecuted_blocks=1 00:13:19.991 00:13:19.991 ' 00:13:19.991 14:45:58 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.991 --rc genhtml_branch_coverage=1 00:13:19.991 --rc genhtml_function_coverage=1 00:13:19.991 --rc genhtml_legend=1 00:13:19.991 --rc geninfo_all_blocks=1 00:13:19.991 --rc geninfo_unexecuted_blocks=1 00:13:19.991 00:13:19.991 ' 00:13:19.991 14:45:58 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.991 14:45:58 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.991 14:45:58 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.991 14:45:58 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.991 14:45:58 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.991 14:45:58 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:19.991 14:45:58 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.991 14:45:58 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:19.991 14:45:58 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:20.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.561 Waiting for block devices as requested 00:13:20.561 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.823 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.823 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.823 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.108 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:26.108 14:46:03 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:26.370 14:46:04 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:26.370 14:46:04 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:26.631 14:46:04 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:26.631 No valid GPT data, bailing 00:13:26.631 14:46:04 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:26.631 14:46:04 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:26.631 14:46:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:26.631 14:46:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.631 14:46:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.631 14:46:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.631 ************************************ 00:13:26.631 START TEST xnvme_rpc 00:13:26.631 ************************************ 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70384 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70384 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70384 ']' 00:13:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.631 14:46:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.893 [2024-12-09 14:46:04.832899] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
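[Editor's note: at this point spdk_tgt is coming up and the xnvme_rpc test begins issuing RPCs. The sketch below condenses that flow, using the rpc.py CLI as a stand-in for the harness's rpc_cmd wrapper; the RPC names and arguments are the ones visible in this log, while the socket-wait loop is a simplified substitute for waitforlisten.]

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" & tgt=$!
# Wait for the default RPC socket instead of the harness's retry loop.
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

# Create an xNVMe bdev over libaio (conserve_cpu left at its false default).
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

# Read the registered config back and verify a parameter, as the test does
# for name, filename, io_mechanism and conserve_cpu in turn.
"$SPDK/scripts/rpc.py" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# expected output: /dev/nvme0n1

"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt" && wait "$tgt" 2>/dev/null || true
```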
00:13:26.893 [2024-12-09 14:46:04.833056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70384 ] 00:13:26.893 [2024-12-09 14:46:04.995397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.155 [2024-12-09 14:46:05.128915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.728 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.729 xnvme_bdev 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.729 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.990 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:27.991 14:46:05 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70384 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70384 ']' 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70384 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.991 14:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70384 00:13:27.991 14:46:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.991 killing process with pid 70384 00:13:27.991 14:46:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.991 14:46:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70384' 00:13:27.991 14:46:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70384 00:13:27.991 14:46:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70384 00:13:29.907 00:13:29.907 real 0m2.933s 00:13:29.907 user 0m2.915s 00:13:29.907 sys 0m0.494s 00:13:29.907 14:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.907 ************************************ 00:13:29.907 END TEST xnvme_rpc 00:13:29.907 ************************************ 00:13:29.907 14:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.907 14:46:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:29.907 14:46:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.907 14:46:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.907 14:46:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.907 ************************************ 00:13:29.907 START TEST xnvme_bdevperf 00:13:29.907 ************************************ 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.907 14:46:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.907 { 00:13:29.907 "subsystems": [ 00:13:29.907 { 00:13:29.907 "subsystem": "bdev", 00:13:29.907 "config": [ 00:13:29.907 { 00:13:29.907 "params": { 00:13:29.907 "io_mechanism": "libaio", 00:13:29.907 "conserve_cpu": false, 00:13:29.907 "filename": "/dev/nvme0n1", 00:13:29.907 "name": "xnvme_bdev" 00:13:29.907 }, 00:13:29.907 "method": "bdev_xnvme_create" 00:13:29.907 }, 00:13:29.907 { 00:13:29.907 "method": "bdev_wait_for_examine" 00:13:29.907 } 00:13:29.907 ] 00:13:29.907 } 00:13:29.907 ] 00:13:29.907 } 00:13:29.907 [2024-12-09 14:46:07.823305] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:13:29.907 [2024-12-09 14:46:07.823458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70458 ] 00:13:29.907 [2024-12-09 14:46:07.983638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.169 [2024-12-09 14:46:08.111497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.430 Running I/O for 5 seconds... 00:13:32.320 26450.00 IOPS, 103.32 MiB/s [2024-12-09T14:46:11.831Z] 26366.00 IOPS, 102.99 MiB/s [2024-12-09T14:46:12.776Z] 25826.67 IOPS, 100.89 MiB/s [2024-12-09T14:46:13.737Z] 25695.25 IOPS, 100.37 MiB/s 00:13:35.615 Latency(us) 00:13:35.615 [2024-12-09T14:46:13.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.615 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:35.615 xnvme_bdev : 5.00 25714.77 100.45 0.00 0.00 2483.90 466.31 7108.14 00:13:35.615 [2024-12-09T14:46:13.737Z] =================================================================================================================== 00:13:35.615 [2024-12-09T14:46:13.737Z] Total : 25714.77 100.45 0.00 0.00 2483.90 466.31 7108.14 00:13:36.559 14:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.559 14:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:36.559 14:46:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:36.559 14:46:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.559 14:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:36.559 { 00:13:36.559 "subsystems": [ 00:13:36.559 { 00:13:36.559 "subsystem": "bdev", 00:13:36.559 "config": [ 00:13:36.559 { 00:13:36.559 "params": { 00:13:36.559 "io_mechanism": "libaio", 00:13:36.559 "conserve_cpu": false, 00:13:36.559 "filename": "/dev/nvme0n1", 00:13:36.559 "name": "xnvme_bdev" 00:13:36.559 }, 00:13:36.559 "method": "bdev_xnvme_create" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "method": "bdev_wait_for_examine" 00:13:36.559 } 00:13:36.559 ] 00:13:36.559 } 00:13:36.559 ] 00:13:36.559 } 00:13:36.559 [2024-12-09 14:46:14.397025] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
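(The bdevperf runs in this test feed the generated JSON config to the tool on /dev/fd/62 via process substitution. A standalone sketch of the same randread run, assuming the JSON block shown above is saved to a file named bdev.json, is:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

i.e. queue depth 64, five seconds of 4 KiB random reads against the xnvme_bdev defined in the config; the randwrite pass now starting changes only -w to randwrite.)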
00:13:36.559 [2024-12-09 14:46:14.397181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70536 ] 00:13:36.559 [2024-12-09 14:46:14.565127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.820 [2024-12-09 14:46:14.701947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.081 Running I/O for 5 seconds... 00:13:38.967 30669.00 IOPS, 119.80 MiB/s [2024-12-09T14:46:18.474Z] 30360.50 IOPS, 118.60 MiB/s [2024-12-09T14:46:19.417Z] 31361.00 IOPS, 122.50 MiB/s [2024-12-09T14:46:20.360Z] 31371.50 IOPS, 122.54 MiB/s 00:13:42.238 Latency(us) 00:13:42.238 [2024-12-09T14:46:20.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.238 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:42.238 xnvme_bdev : 5.00 31204.72 121.89 0.00 0.00 2046.64 482.07 9628.75 00:13:42.238 [2024-12-09T14:46:20.360Z] =================================================================================================================== 00:13:42.238 [2024-12-09T14:46:20.360Z] Total : 31204.72 121.89 0.00 0.00 2046.64 482.07 9628.75 00:13:43.180 00:13:43.180 real 0m13.211s 00:13:43.180 user 0m4.822s 00:13:43.180 sys 0m6.465s 00:13:43.180 14:46:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.180 ************************************ 00:13:43.180 END TEST xnvme_bdevperf 00:13:43.180 ************************************ 00:13:43.180 14:46:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.180 14:46:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:43.180 14:46:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.180 14:46:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.180 14:46:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.180 ************************************ 00:13:43.180 START TEST xnvme_fio_plugin 00:13:43.180 ************************************ 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.180 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.181 14:46:21 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.181 14:46:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.181 { 00:13:43.181 "subsystems": [ 00:13:43.181 { 00:13:43.181 "subsystem": "bdev", 00:13:43.181 "config": [ 00:13:43.181 { 00:13:43.181 "params": { 00:13:43.181 "io_mechanism": "libaio", 00:13:43.181 "conserve_cpu": false, 00:13:43.181 "filename": "/dev/nvme0n1", 00:13:43.181 "name": "xnvme_bdev" 00:13:43.181 }, 00:13:43.181 "method": "bdev_xnvme_create" 00:13:43.181 }, 00:13:43.181 { 00:13:43.181 "method": "bdev_wait_for_examine" 00:13:43.181 } 00:13:43.181 ] 00:13:43.181 } 00:13:43.181 ] 00:13:43.181 } 00:13:43.181 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.181 fio-3.35 00:13:43.181 Starting 1 thread 00:13:49.770 00:13:49.770 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70660: Mon Dec 9 14:46:27 2024 00:13:49.770 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec) 00:13:49.770 slat (usec): min=4, max=2123, avg=22.90, stdev=93.11 00:13:49.770 clat (usec): min=105, max=4826, avg=1309.97, stdev=554.21 00:13:49.770 lat (usec): min=182, max=5117, avg=1332.87, stdev=546.52 00:13:49.770 clat percentiles (usec): 00:13:49.770 | 1.00th=[ 269], 5.00th=[ 469], 10.00th=[ 627], 20.00th=[ 848], 00:13:49.770 | 30.00th=[ 1012], 40.00th=[ 1156], 50.00th=[ 1287], 60.00th=[ 1418], 00:13:49.770 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 1975], 95.00th=[ 2212], 00:13:49.770 | 99.00th=[ 3032], 99.50th=[ 3359], 99.90th=[ 4047], 99.95th=[ 4228], 00:13:49.770 | 99.99th=[ 4555] 00:13:49.770 bw ( KiB/s): min=122472, max=147104, per=99.90%, avg=132673.33, stdev=8028.15, 
samples=9 00:13:49.770 iops : min=30618, max=36776, avg=33168.33, stdev=2007.04, samples=9 00:13:49.770 lat (usec) : 250=0.79%, 500=5.10%, 750=9.56%, 1000=13.57% 00:13:49.770 lat (msec) : 2=61.57%, 4=9.28%, 10=0.13% 00:13:49.770 cpu : usr=37.48%, sys=52.78%, ctx=34, majf=0, minf=764 00:13:49.770 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.4%, 16=23.8%, 32=61.4%, >=64=2.1% 00:13:49.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.770 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:49.770 issued rwts: total=166046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:49.770 00:13:49.770 Run status group 0 (all jobs): 00:13:49.770 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (680MB), run=5001-5001msec 00:13:50.031 ----------------------------------------------------- 00:13:50.031 Suppressions used: 00:13:50.031 count bytes template 00:13:50.031 1 11 /usr/src/fio/parse.c 00:13:50.031 1 8 libtcmalloc_minimal.so 00:13:50.031 1 904 libcrypto.so 00:13:50.031 ----------------------------------------------------- 00:13:50.031 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:50.031 14:46:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.031 { 00:13:50.031 "subsystems": [ 00:13:50.031 { 00:13:50.031 "subsystem": "bdev", 00:13:50.031 "config": [ 00:13:50.031 { 00:13:50.031 "params": { 00:13:50.031 "io_mechanism": "libaio", 00:13:50.031 "conserve_cpu": false, 00:13:50.031 "filename": "/dev/nvme0n1", 00:13:50.031 "name": "xnvme_bdev" 00:13:50.031 }, 00:13:50.031 "method": "bdev_xnvme_create" 00:13:50.031 }, 00:13:50.031 { 00:13:50.031 "method": "bdev_wait_for_examine" 00:13:50.031 } 00:13:50.031 ] 00:13:50.031 } 00:13:50.031 ] 00:13:50.031 } 00:13:50.291 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:50.291 fio-3.35 00:13:50.291 Starting 1 thread 00:13:56.923 00:13:56.923 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70752: Mon Dec 9 14:46:34 2024 00:13:56.923 write: IOPS=33.1k, BW=129MiB/s (136MB/s)(651MiB/5027msec); 0 zone resets 00:13:56.923 slat (usec): min=4, max=1908, avg=22.02, stdev=75.77 00:13:56.923 clat (usec): min=11, max=55786, avg=1355.84, stdev=1939.01 00:13:56.923 lat (usec): min=78, max=55791, avg=1377.87, stdev=1937.55 00:13:56.923 clat percentiles (usec): 00:13:56.923 | 1.00th=[ 239], 5.00th=[ 388], 10.00th=[ 510], 20.00th=[ 701], 00:13:56.923 | 30.00th=[ 848], 40.00th=[ 971], 50.00th=[ 1106], 60.00th=[ 1237], 00:13:56.923 | 70.00th=[ 1401], 80.00th=[ 1598], 90.00th=[ 1975], 95.00th=[ 2573], 00:13:56.923 | 99.00th=[ 7046], 99.50th=[ 9110], 99.90th=[33817], 99.95th=[35390], 00:13:56.923 | 99.99th=[47449] 00:13:56.924 bw ( KiB/s): min=84592, max=158928, per=100.00%, avg=133121.90, stdev=21179.08, samples=10 00:13:56.924 iops : min=21148, max=39732, avg=33280.40, stdev=5294.79, samples=10 00:13:56.924 lat (usec) : 20=0.01%, 50=0.01%, 100=0.04%, 250=1.10%, 500=8.37% 00:13:56.924 lat (usec) : 750=13.59%, 1000=18.98% 00:13:56.924 lat (msec) : 2=48.45%, 4=7.24%, 10=1.86%, 20=0.11%, 50=0.25% 00:13:56.924 lat (msec) : 100=0.01% 00:13:56.924 cpu : usr=38.64%, sys=48.19%, ctx=34, majf=0, minf=765 00:13:56.924 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=7.4%, 16=22.6%, 32=64.2%, >=64=2.6% 00:13:56.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.924 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:56.924 issued rwts: total=0,166591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.924 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:56.924 00:13:56.924 Run status group 0 (all jobs): 00:13:56.924 WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=651MiB (682MB), run=5027-5027msec 00:13:57.189 ----------------------------------------------------- 00:13:57.189 Suppressions used: 00:13:57.189 count bytes template 00:13:57.189 1 11 /usr/src/fio/parse.c 00:13:57.189 1 8 libtcmalloc_minimal.so 00:13:57.190 1 904 libcrypto.so 00:13:57.190 ----------------------------------------------------- 00:13:57.190 00:13:57.190 00:13:57.190 real 
0m14.112s 00:13:57.190 user 0m6.779s 00:13:57.190 sys 0m5.792s 00:13:57.190 14:46:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.190 ************************************ 00:13:57.190 END TEST xnvme_fio_plugin 00:13:57.190 ************************************ 00:13:57.190 14:46:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.190 14:46:35 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:57.190 14:46:35 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:57.190 14:46:35 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:57.190 14:46:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:57.190 14:46:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.190 14:46:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.190 14:46:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.190 ************************************ 00:13:57.190 START TEST xnvme_rpc 00:13:57.190 ************************************ 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70838 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70838 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70838 ']' 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.190 14:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:57.190 [2024-12-09 14:46:35.309988] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
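(This second xnvme_rpc pass repeats the libaio flow with conserve_cpu=true; per the cc mapping set up at the top of the test, the only difference on the wire is the -c flag on the create call. A by-hand sketch, under the same scripts/rpc.py assumption as before:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
)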
00:13:57.190 [2024-12-09 14:46:35.310150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:13:57.450 [2024-12-09 14:46:35.471608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.710 [2024-12-09 14:46:35.611113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.651 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 xnvme_bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:58.652 14:46:36 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70838 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70838 ']' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70838 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70838 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.652 killing process with pid 70838 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70838' 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70838 00:13:58.652 14:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70838 00:14:00.569 00:14:00.569 real 0m3.220s 00:14:00.569 user 0m3.100s 00:14:00.569 sys 0m0.585s 00:14:00.569 14:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.569 ************************************ 00:14:00.569 END TEST xnvme_rpc 00:14:00.569 ************************************ 00:14:00.569 14:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 14:46:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:00.569 14:46:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.569 14:46:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.569 14:46:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 ************************************ 00:14:00.569 START TEST xnvme_bdevperf 00:14:00.569 ************************************ 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.569 14:46:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.569 { 00:14:00.569 "subsystems": [ 00:14:00.569 { 00:14:00.569 "subsystem": "bdev", 00:14:00.569 "config": [ 00:14:00.569 { 00:14:00.569 "params": { 00:14:00.569 "io_mechanism": "libaio", 00:14:00.569 "conserve_cpu": true, 00:14:00.569 "filename": "/dev/nvme0n1", 00:14:00.569 "name": "xnvme_bdev" 00:14:00.569 }, 00:14:00.569 "method": "bdev_xnvme_create" 00:14:00.569 }, 00:14:00.569 { 00:14:00.569 "method": "bdev_wait_for_examine" 00:14:00.569 } 00:14:00.569 ] 00:14:00.569 } 00:14:00.569 ] 00:14:00.569 } 00:14:00.569 [2024-12-09 14:46:38.590322] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:14:00.569 [2024-12-09 14:46:38.590487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:14:00.829 [2024-12-09 14:46:38.754936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.829 [2024-12-09 14:46:38.898028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.401 Running I/O for 5 seconds... 00:14:03.285 32129.00 IOPS, 125.50 MiB/s [2024-12-09T14:46:42.351Z] 31414.00 IOPS, 122.71 MiB/s [2024-12-09T14:46:43.295Z] 31415.67 IOPS, 122.72 MiB/s [2024-12-09T14:46:44.681Z] 30905.50 IOPS, 120.72 MiB/s [2024-12-09T14:46:44.681Z] 31317.00 IOPS, 122.33 MiB/s 00:14:06.559 Latency(us) 00:14:06.559 [2024-12-09T14:46:44.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.559 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.559 xnvme_bdev : 5.00 31302.59 122.28 0.00 0.00 2040.16 305.62 13712.15 00:14:06.559 [2024-12-09T14:46:44.681Z] =================================================================================================================== 00:14:06.559 [2024-12-09T14:46:44.681Z] Total : 31302.59 122.28 0.00 0.00 2040.16 305.62 13712.15 00:14:07.129 14:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.129 14:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:07.129 14:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.129 14:46:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.129 14:46:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.129 { 00:14:07.129 "subsystems": [ 00:14:07.129 { 00:14:07.129 "subsystem": "bdev", 00:14:07.129 "config": [ 00:14:07.129 { 00:14:07.129 "params": { 00:14:07.129 "io_mechanism": "libaio", 00:14:07.129 "conserve_cpu": true, 00:14:07.129 "filename": "/dev/nvme0n1", 00:14:07.129 "name": "xnvme_bdev" 00:14:07.129 }, 00:14:07.129 "method": "bdev_xnvme_create" 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "method": "bdev_wait_for_examine" 00:14:07.129 } 00:14:07.129 ] 00:14:07.129 } 00:14:07.129 ] 00:14:07.129 } 00:14:07.129 [2024-12-09 14:46:45.231009] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
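(The MiB/s column bdevperf prints is derived from IOPS at the fixed 4 KiB I/O size set by -o 4096. A quick sanity check of the randread result just above:

    # 31302.59 IOPS x 4096 bytes per I/O, expressed in MiB/s
    awk 'BEGIN { printf "%.2f\n", 31302.59 * 4096 / (1024 * 1024) }'   # prints 122.28

which matches the 122.28 MiB/s reported in the table.)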
00:14:07.129 [2024-12-09 14:46:45.231164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70993 ] 00:14:07.390 [2024-12-09 14:46:45.400266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.650 [2024-12-09 14:46:45.544627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.911 Running I/O for 5 seconds... 00:14:09.796 3433.00 IOPS, 13.41 MiB/s [2024-12-09T14:46:49.299Z] 5349.50 IOPS, 20.90 MiB/s [2024-12-09T14:46:50.241Z] 4752.33 IOPS, 18.56 MiB/s [2024-12-09T14:46:51.180Z] 4449.50 IOPS, 17.38 MiB/s [2024-12-09T14:46:51.180Z] 4274.80 IOPS, 16.70 MiB/s 00:14:13.058 Latency(us) 00:14:13.058 [2024-12-09T14:46:51.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.058 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.058 xnvme_bdev : 5.02 4270.45 16.68 0.00 0.00 14952.62 60.65 39926.55 00:14:13.058 [2024-12-09T14:46:51.180Z] =================================================================================================================== 00:14:13.058 [2024-12-09T14:46:51.180Z] Total : 4270.45 16.68 0.00 0.00 14952.62 60.65 39926.55 00:14:13.998 00:14:13.998 real 0m13.300s 00:14:13.998 user 0m8.374s 00:14:13.998 sys 0m3.745s 00:14:13.998 14:46:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.998 14:46:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.998 ************************************ 00:14:13.998 END TEST xnvme_bdevperf 00:14:13.998 ************************************ 00:14:13.998 14:46:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:13.998 14:46:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.998 14:46:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.998 14:46:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.998 ************************************ 00:14:13.998 START TEST xnvme_fio_plugin 00:14:13.998 ************************************ 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:13.998 14:46:51 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:13.998 14:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.998 { 00:14:13.998 "subsystems": [ 00:14:13.998 { 00:14:13.998 "subsystem": "bdev", 00:14:13.998 "config": [ 00:14:13.998 { 00:14:13.998 "params": { 00:14:13.998 "io_mechanism": "libaio", 00:14:13.998 "conserve_cpu": true, 00:14:13.998 "filename": "/dev/nvme0n1", 00:14:13.998 "name": "xnvme_bdev" 00:14:13.998 }, 00:14:13.998 "method": "bdev_xnvme_create" 00:14:13.998 }, 00:14:13.998 { 00:14:13.998 "method": "bdev_wait_for_examine" 00:14:13.998 } 00:14:13.998 ] 00:14:13.998 } 00:14:13.998 ] 00:14:13.998 } 00:14:13.998 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:13.998 fio-3.35 00:14:13.998 Starting 1 thread 00:14:20.588 00:14:20.588 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71113: Mon Dec 9 14:46:57 2024 00:14:20.588 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec) 00:14:20.588 slat (usec): min=4, max=2222, avg=21.93, stdev=92.33 00:14:20.588 clat (usec): min=105, max=5785, avg=1330.65, stdev=559.66 00:14:20.588 lat (usec): min=190, max=5906, avg=1352.58, stdev=553.42 00:14:20.588 clat percentiles (usec): 00:14:20.588 | 1.00th=[ 273], 5.00th=[ 498], 10.00th=[ 652], 20.00th=[ 881], 00:14:20.588 | 30.00th=[ 1045], 40.00th=[ 1172], 50.00th=[ 1303], 60.00th=[ 1418], 00:14:20.588 | 70.00th=[ 1549], 80.00th=[ 1713], 90.00th=[ 1975], 95.00th=[ 2278], 00:14:20.588 | 99.00th=[ 3163], 99.50th=[ 3490], 99.90th=[ 4113], 99.95th=[ 4424], 00:14:20.588 | 99.99th=[ 5276] 00:14:20.588 bw ( KiB/s): min=126792, max=150072, 
per=100.00%, avg=132843.70, stdev=6865.64, samples=10 00:14:20.588 iops : min=31698, max=37518, avg=33210.90, stdev=1716.24, samples=10 00:14:20.588 lat (usec) : 250=0.72%, 500=4.29%, 750=8.77%, 1000=12.95% 00:14:20.588 lat (msec) : 2=63.87%, 4=9.26%, 10=0.14% 00:14:20.588 cpu : usr=40.38%, sys=50.30%, ctx=28, majf=0, minf=764 00:14:20.588 IO depths : 1=0.5%, 2=1.2%, 4=3.2%, 8=8.9%, 16=23.4%, 32=60.7%, >=64=2.1% 00:14:20.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.588 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:14:20.588 issued rwts: total=165958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:20.588 00:14:20.588 Run status group 0 (all jobs): 00:14:20.588 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (680MB), run=5001-5001msec 00:14:20.849 ----------------------------------------------------- 00:14:20.849 Suppressions used: 00:14:20.849 count bytes template 00:14:20.849 1 11 /usr/src/fio/parse.c 00:14:20.849 1 8 libtcmalloc_minimal.so 00:14:20.849 1 904 libcrypto.so 00:14:20.849 ----------------------------------------------------- 00:14:20.849 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:20.850 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:21.111 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:21.111 14:46:58 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:21.111 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:21.111 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:21.111 14:46:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:21.111 { 00:14:21.111 "subsystems": [ 00:14:21.111 { 00:14:21.111 "subsystem": "bdev", 00:14:21.111 "config": [ 00:14:21.111 { 00:14:21.111 "params": { 00:14:21.111 "io_mechanism": "libaio", 00:14:21.111 "conserve_cpu": true, 00:14:21.111 "filename": "/dev/nvme0n1", 00:14:21.111 "name": "xnvme_bdev" 00:14:21.111 }, 00:14:21.111 "method": "bdev_xnvme_create" 00:14:21.111 }, 00:14:21.111 { 00:14:21.111 "method": "bdev_wait_for_examine" 00:14:21.111 } 00:14:21.111 ] 00:14:21.111 } 00:14:21.111 ] 00:14:21.111 } 00:14:21.111 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:21.111 fio-3.35 00:14:21.111 Starting 1 thread 00:14:27.700 00:14:27.700 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71206: Mon Dec 9 14:47:04 2024 00:14:27.700 write: IOPS=31.1k, BW=121MiB/s (127MB/s)(607MiB/5001msec); 0 zone resets 00:14:27.700 slat (usec): min=4, max=1800, avg=26.88, stdev=90.59 00:14:27.700 clat (usec): min=106, max=5405, avg=1313.31, stdev=598.65 00:14:27.700 lat (usec): min=181, max=5413, avg=1340.19, stdev=592.78 00:14:27.700 clat percentiles (usec): 00:14:27.700 | 1.00th=[ 255], 5.00th=[ 429], 10.00th=[ 594], 20.00th=[ 799], 00:14:27.700 | 30.00th=[ 979], 40.00th=[ 1123], 50.00th=[ 1270], 60.00th=[ 1401], 00:14:27.700 | 70.00th=[ 1565], 80.00th=[ 1762], 90.00th=[ 2057], 95.00th=[ 2376], 00:14:27.700 | 99.00th=[ 3130], 99.50th=[ 3392], 99.90th=[ 4047], 99.95th=[ 4293], 00:14:27.700 | 99.99th=[ 5080] 00:14:27.700 bw ( KiB/s): min=115840, max=142443, per=100.00%, avg=125072.33, stdev=7495.97, samples=9 00:14:27.700 iops : min=28960, max=35610, avg=31268.00, stdev=1873.78, samples=9 00:14:27.700 lat (usec) : 250=0.92%, 500=5.95%, 750=10.44%, 1000=14.20% 00:14:27.700 lat (msec) : 2=57.10%, 4=11.26%, 10=0.12% 00:14:27.700 cpu : usr=30.56%, sys=58.70%, ctx=12, majf=0, minf=765 00:14:27.700 IO depths : 1=0.3%, 2=0.8%, 4=2.9%, 8=9.0%, 16=25.0%, 32=60.0%, >=64=2.0% 00:14:27.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.700 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:27.700 issued rwts: total=0,155456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.700 00:14:27.700 Run status group 0 (all jobs): 00:14:27.700 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=607MiB (637MB), run=5001-5001msec 00:14:27.960 ----------------------------------------------------- 00:14:27.960 Suppressions used: 00:14:27.960 count bytes template 00:14:27.960 1 11 /usr/src/fio/parse.c 00:14:27.960 1 8 libtcmalloc_minimal.so 00:14:27.960 1 904 libcrypto.so 00:14:27.960 ----------------------------------------------------- 00:14:27.960 00:14:27.960 00:14:27.960 real 0m14.126s 00:14:27.960 user 0m6.578s 00:14:27.960 sys 0m6.163s 00:14:27.960 14:47:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.960 14:47:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:27.960 ************************************ 00:14:27.960 END TEST xnvme_fio_plugin 00:14:27.960 ************************************ 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:27.960 14:47:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:27.960 14:47:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.960 14:47:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.960 14:47:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.960 ************************************ 00:14:27.960 START TEST xnvme_rpc 00:14:27.960 ************************************ 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71292 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71292 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71292 ']' 00:14:27.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.960 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.961 14:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.961 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.961 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.961 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.961 14:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.221 [2024-12-09 14:47:06.175660] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
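(Third xnvme_rpc pass: same device and bdev name, but with the io_uring mechanism selected from xnvme_io, starting again from conserve_cpu=false. The by-hand equivalent, under the same scripts/rpc.py assumption as before:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
)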
00:14:28.221 [2024-12-09 14:47:06.175859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:14:28.221 [2024-12-09 14:47:06.341908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.483 [2024-12-09 14:47:06.482853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 xnvme_bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:29.506 14:47:07 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71292 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71292 ']' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71292 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71292 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.506 killing process with pid 71292 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71292' 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71292 00:14:29.506 14:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71292 00:14:31.430 ************************************ 00:14:31.430 END TEST xnvme_rpc 00:14:31.430 ************************************ 00:14:31.430 00:14:31.430 real 0m3.242s 00:14:31.430 user 0m3.127s 00:14:31.430 sys 0m0.600s 00:14:31.430 14:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.430 14:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.430 14:47:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:31.430 14:47:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.430 14:47:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.430 14:47:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.430 ************************************ 00:14:31.430 START TEST xnvme_bdevperf 00:14:31.430 ************************************ 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:31.430 14:47:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.430 { 00:14:31.430 "subsystems": [ 00:14:31.430 { 00:14:31.430 "subsystem": "bdev", 00:14:31.430 "config": [ 00:14:31.430 { 00:14:31.430 "params": { 00:14:31.430 "io_mechanism": "io_uring", 00:14:31.430 "conserve_cpu": false, 00:14:31.430 "filename": "/dev/nvme0n1", 00:14:31.430 "name": "xnvme_bdev" 00:14:31.430 }, 00:14:31.430 "method": "bdev_xnvme_create" 00:14:31.430 }, 00:14:31.430 { 00:14:31.430 "method": "bdev_wait_for_examine" 00:14:31.430 } 00:14:31.430 ] 00:14:31.430 } 00:14:31.430 ] 00:14:31.430 } 00:14:31.430 [2024-12-09 14:47:09.473278] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:14:31.430 [2024-12-09 14:47:09.473439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71366 ] 00:14:31.690 [2024-12-09 14:47:09.641928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.690 [2024-12-09 14:47:09.789166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.260 Running I/O for 5 seconds... 00:14:34.147 34229.00 IOPS, 133.71 MiB/s [2024-12-09T14:47:13.212Z] 34222.50 IOPS, 133.68 MiB/s [2024-12-09T14:47:14.156Z] 34004.67 IOPS, 132.83 MiB/s [2024-12-09T14:47:15.540Z] 33710.00 IOPS, 131.68 MiB/s [2024-12-09T14:47:15.540Z] 33518.80 IOPS, 130.93 MiB/s 00:14:37.418 Latency(us) 00:14:37.418 [2024-12-09T14:47:15.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.418 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:37.418 xnvme_bdev : 5.00 33490.04 130.82 0.00 0.00 1905.95 291.45 12048.54 00:14:37.418 [2024-12-09T14:47:15.540Z] =================================================================================================================== 00:14:37.418 [2024-12-09T14:47:15.540Z] Total : 33490.04 130.82 0.00 0.00 1905.95 291.45 12048.54 00:14:37.990 14:47:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.990 14:47:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:37.990 14:47:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:37.990 14:47:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:37.990 14:47:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:37.990 { 00:14:37.990 "subsystems": [ 00:14:37.990 { 00:14:37.990 "subsystem": "bdev", 00:14:37.990 "config": [ 00:14:37.990 { 00:14:37.990 "params": { 00:14:37.990 "io_mechanism": "io_uring", 00:14:37.990 "conserve_cpu": false, 00:14:37.990 "filename": "/dev/nvme0n1", 00:14:37.990 "name": "xnvme_bdev" 00:14:37.990 }, 00:14:37.990 "method": "bdev_xnvme_create" 00:14:37.990 }, 00:14:37.990 { 00:14:37.990 "method": "bdev_wait_for_examine" 00:14:37.990 } 00:14:37.990 ] 00:14:37.990 } 00:14:37.990 ] 00:14:37.990 } 00:14:37.990 [2024-12-09 14:47:16.094212] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
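[editor's note] The JSON blob gen_conf emits is handed to bdevperf on fd 62 via process substitution; the same run works with an ordinary file. A hedged equivalent of the randread invocation above, with the config written out explicitly (the /tmp path is illustrative, not from the trace):

  cat > /tmp/xnvme_bdev.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create", "params": {"io_mechanism": "io_uring",
     "conserve_cpu": false, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  ./build/examples/bdevperf --json /tmp/xnvme_bdev.json \
      -q 64 -w randread -t 5 -T xnvme_bdev -o 4096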
00:14:37.990 [2024-12-09 14:47:16.094358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71447 ] 00:14:38.251 [2024-12-09 14:47:16.261890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.513 [2024-12-09 14:47:16.393936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.774 Running I/O for 5 seconds... 00:14:40.682 4927.00 IOPS, 19.25 MiB/s [2024-12-09T14:47:19.748Z] 5154.50 IOPS, 20.13 MiB/s [2024-12-09T14:47:21.131Z] 5245.33 IOPS, 20.49 MiB/s [2024-12-09T14:47:22.064Z] 5323.00 IOPS, 20.79 MiB/s [2024-12-09T14:47:22.064Z] 5929.40 IOPS, 23.16 MiB/s 00:14:43.942 Latency(us) 00:14:43.942 [2024-12-09T14:47:22.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.942 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:43.942 xnvme_bdev : 5.01 5930.92 23.17 0.00 0.00 10777.34 54.74 35086.97 00:14:43.942 [2024-12-09T14:47:22.064Z] =================================================================================================================== 00:14:43.942 [2024-12-09T14:47:22.064Z] Total : 5930.92 23.17 0.00 0.00 10777.34 54.74 35086.97 00:14:44.509 00:14:44.509 real 0m13.105s 00:14:44.509 user 0m5.960s 00:14:44.509 sys 0m6.875s 00:14:44.509 14:47:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.509 ************************************ 00:14:44.509 END TEST xnvme_bdevperf 00:14:44.509 ************************************ 00:14:44.509 14:47:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.509 14:47:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:44.509 14:47:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.509 14:47:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.509 14:47:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.509 ************************************ 00:14:44.509 START TEST xnvme_fio_plugin 00:14:44.509 ************************************ 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.509 14:47:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.509 { 00:14:44.509 "subsystems": [ 00:14:44.509 { 00:14:44.509 "subsystem": "bdev", 00:14:44.509 "config": [ 00:14:44.509 { 00:14:44.509 "params": { 00:14:44.509 "io_mechanism": "io_uring", 00:14:44.509 "conserve_cpu": false, 00:14:44.509 "filename": "/dev/nvme0n1", 00:14:44.509 "name": "xnvme_bdev" 00:14:44.509 }, 00:14:44.509 "method": "bdev_xnvme_create" 00:14:44.509 }, 00:14:44.509 { 00:14:44.509 "method": "bdev_wait_for_examine" 00:14:44.509 } 00:14:44.509 ] 00:14:44.509 } 00:14:44.509 ] 00:14:44.509 } 00:14:44.767 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:44.767 fio-3.35 00:14:44.767 Starting 1 thread 00:14:51.329 00:14:51.329 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71567: Mon Dec 9 14:47:28 2024 00:14:51.329 read: IOPS=53.4k, BW=208MiB/s (219MB/s)(1043MiB/5006msec) 00:14:51.329 slat (usec): min=2, max=245, avg= 3.62, stdev= 1.73 00:14:51.329 clat (usec): min=120, max=15360, avg=1071.27, stdev=513.14 00:14:51.329 lat (usec): min=123, max=15363, avg=1074.89, stdev=513.30 00:14:51.329 clat percentiles (usec): 00:14:51.329 | 1.00th=[ 627], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 766], 00:14:51.329 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 898], 60.00th=[ 979], 00:14:51.329 | 70.00th=[ 1090], 80.00th=[ 1303], 90.00th=[ 1647], 95.00th=[ 1942], 00:14:51.329 | 99.00th=[ 2999], 99.50th=[ 3523], 99.90th=[ 6128], 99.95th=[ 7046], 00:14:51.329 | 99.99th=[ 9634] 00:14:51.329 bw ( KiB/s): min=139432, max=261632, 
per=100.00%, avg=213628.00, stdev=50771.59, samples=10 00:14:51.329 iops : min=34858, max=65408, avg=53407.00, stdev=12692.90, samples=10 00:14:51.329 lat (usec) : 250=0.02%, 500=0.29%, 750=16.90%, 1000=45.50% 00:14:51.329 lat (msec) : 2=32.94%, 4=4.03%, 10=0.32%, 20=0.01% 00:14:51.329 cpu : usr=36.40%, sys=62.42%, ctx=66, majf=0, minf=762 00:14:51.329 IO depths : 1=1.0%, 2=2.1%, 4=4.7%, 8=10.6%, 16=24.1%, 32=55.5%, >=64=2.0% 00:14:51.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.329 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:14:51.329 issued rwts: total=267098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.329 00:14:51.329 Run status group 0 (all jobs): 00:14:51.329 READ: bw=208MiB/s (219MB/s), 208MiB/s-208MiB/s (219MB/s-219MB/s), io=1043MiB (1094MB), run=5006-5006msec 00:14:51.329 ----------------------------------------------------- 00:14:51.329 Suppressions used: 00:14:51.329 count bytes template 00:14:51.329 1 11 /usr/src/fio/parse.c 00:14:51.329 1 8 libtcmalloc_minimal.so 00:14:51.329 1 904 libcrypto.so 00:14:51.329 ----------------------------------------------------- 00:14:51.329 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.329 
14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.329 14:47:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.329 { 00:14:51.329 "subsystems": [ 00:14:51.329 { 00:14:51.329 "subsystem": "bdev", 00:14:51.329 "config": [ 00:14:51.329 { 00:14:51.329 "params": { 00:14:51.329 "io_mechanism": "io_uring", 00:14:51.329 "conserve_cpu": false, 00:14:51.329 "filename": "/dev/nvme0n1", 00:14:51.329 "name": "xnvme_bdev" 00:14:51.329 }, 00:14:51.329 "method": "bdev_xnvme_create" 00:14:51.329 }, 00:14:51.329 { 00:14:51.329 "method": "bdev_wait_for_examine" 00:14:51.329 } 00:14:51.329 ] 00:14:51.329 } 00:14:51.329 ] 00:14:51.329 } 00:14:51.590 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.591 fio-3.35 00:14:51.591 Starting 1 thread 00:14:58.226 00:14:58.226 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71653: Mon Dec 9 14:47:35 2024 00:14:58.226 write: IOPS=37.6k, BW=147MiB/s (154MB/s)(735MiB/5002msec); 0 zone resets 00:14:58.226 slat (nsec): min=2225, max=65484, avg=4085.78, stdev=2202.95 00:14:58.226 clat (usec): min=134, max=10668, avg=1537.33, stdev=453.15 00:14:58.226 lat (usec): min=138, max=10704, avg=1541.42, stdev=453.46 00:14:58.226 clat percentiles (usec): 00:14:58.226 | 1.00th=[ 766], 5.00th=[ 889], 10.00th=[ 996], 20.00th=[ 1172], 00:14:58.226 | 30.00th=[ 1336], 40.00th=[ 1450], 50.00th=[ 1532], 60.00th=[ 1631], 00:14:58.226 | 70.00th=[ 1713], 80.00th=[ 1827], 90.00th=[ 2008], 95.00th=[ 2180], 00:14:58.226 | 99.00th=[ 2606], 99.50th=[ 2933], 99.90th=[ 5866], 99.95th=[ 7504], 00:14:58.226 | 99.99th=[ 9765] 00:14:58.226 bw ( KiB/s): min=132536, max=203264, per=100.00%, avg=152405.33, stdev=27816.60, samples=9 00:14:58.226 iops : min=33134, max=50816, avg=38101.33, stdev=6954.15, samples=9 00:14:58.226 lat (usec) : 250=0.01%, 500=0.03%, 750=0.70%, 1000=9.72% 00:14:58.226 lat (msec) : 2=79.31%, 4=10.00%, 10=0.24%, 20=0.01% 00:14:58.226 cpu : usr=35.97%, sys=62.69%, ctx=9, majf=0, minf=763 00:14:58.226 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.6%, >=64=1.6% 00:14:58.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.226 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:58.226 issued rwts: total=0,188245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.226 00:14:58.226 Run status group 0 (all jobs): 00:14:58.226 WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=735MiB (771MB), run=5002-5002msec 00:14:58.226 ----------------------------------------------------- 00:14:58.226 Suppressions used: 00:14:58.226 count bytes template 00:14:58.226 1 11 /usr/src/fio/parse.c 00:14:58.226 1 8 libtcmalloc_minimal.so 00:14:58.226 1 904 libcrypto.so 00:14:58.226 ----------------------------------------------------- 00:14:58.226 00:14:58.226 00:14:58.226 real 0m13.799s 00:14:58.226 user 0m6.480s 00:14:58.226 sys 
0m6.853s 00:14:58.226 14:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.226 ************************************ 00:14:58.226 END TEST xnvme_fio_plugin 00:14:58.226 ************************************ 00:14:58.226 14:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.489 14:47:36 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:58.489 14:47:36 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:58.489 14:47:36 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:58.489 14:47:36 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:58.489 14:47:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.489 14:47:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.489 14:47:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.489 ************************************ 00:14:58.489 START TEST xnvme_rpc 00:14:58.489 ************************************ 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:58.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71739 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71739 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71739 ']' 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.489 14:47:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.489 [2024-12-09 14:47:36.502192] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
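[editor's note] At this point the suite loops back with conserve_cpu flipped to true and reruns the same three tests against the same device. A rough reconstruction of the xnvme.sh driver loop implied by the @75-@88 line tags in the trace; the exact variable names beyond those tags are assumptions:

  for io in "${xnvme_io[@]}"; do                         # io_uring, io_uring_cmd, ...
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=$filename   # /dev/nvme0n1 or /dev/ng0n1 per mechanism
      for cc in "${xnvme_conserve_cpu[@]}"; do           # false, then true
          method_bdev_xnvme_create_0["conserve_cpu"]=$cc
          conserve_cpu=$cc
          run_test xnvme_rpc xnvme_rpc
          run_test xnvme_bdevperf xnvme_bdevperf
          run_test xnvme_fio_plugin xnvme_fio_plugin
      done
  done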
00:14:58.489 [2024-12-09 14:47:36.502537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71739 ] 00:14:58.751 [2024-12-09 14:47:36.666645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.751 [2024-12-09 14:47:36.798321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 xnvme_bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71739 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71739 ']' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71739 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71739 00:14:59.695 killing process with pid 71739 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71739' 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71739 00:14:59.695 14:47:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71739 00:15:01.613 00:15:01.613 real 0m2.974s 00:15:01.613 user 0m2.993s 00:15:01.613 sys 0m0.505s 00:15:01.613 14:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.613 ************************************ 00:15:01.613 END TEST xnvme_rpc 00:15:01.613 ************************************ 00:15:01.613 14:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.613 14:47:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:01.613 14:47:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.613 14:47:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.613 14:47:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.613 ************************************ 00:15:01.613 START TEST xnvme_bdevperf 00:15:01.613 ************************************ 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:01.613 14:47:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.613 { 00:15:01.613 "subsystems": [ 00:15:01.613 { 00:15:01.613 "subsystem": "bdev", 00:15:01.613 "config": [ 00:15:01.613 { 00:15:01.613 "params": { 00:15:01.613 "io_mechanism": "io_uring", 00:15:01.613 "conserve_cpu": true, 00:15:01.613 "filename": "/dev/nvme0n1", 00:15:01.613 "name": "xnvme_bdev" 00:15:01.613 }, 00:15:01.613 "method": "bdev_xnvme_create" 00:15:01.613 }, 00:15:01.613 { 00:15:01.613 "method": "bdev_wait_for_examine" 00:15:01.613 } 00:15:01.613 ] 00:15:01.613 } 00:15:01.613 ] 00:15:01.613 } 00:15:01.613 [2024-12-09 14:47:39.526493] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:01.613 [2024-12-09 14:47:39.526644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71812 ] 00:15:01.613 [2024-12-09 14:47:39.692976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.875 [2024-12-09 14:47:39.821356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.136 Running I/O for 5 seconds... 00:15:04.025 33757.00 IOPS, 131.86 MiB/s [2024-12-09T14:47:43.534Z] 33108.50 IOPS, 129.33 MiB/s [2024-12-09T14:47:44.478Z] 33157.67 IOPS, 129.52 MiB/s [2024-12-09T14:47:45.422Z] 33186.50 IOPS, 129.63 MiB/s [2024-12-09T14:47:45.422Z] 33209.80 IOPS, 129.73 MiB/s 00:15:07.300 Latency(us) 00:15:07.300 [2024-12-09T14:47:45.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.300 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:07.300 xnvme_bdev : 5.01 33185.98 129.63 0.00 0.00 1924.14 294.60 14720.39 00:15:07.300 [2024-12-09T14:47:45.422Z] =================================================================================================================== 00:15:07.300 [2024-12-09T14:47:45.422Z] Total : 33185.98 129.63 0.00 0.00 1924.14 294.60 14720.39 00:15:07.872 14:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.872 14:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:07.872 14:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:07.872 14:47:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:07.872 14:47:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.872 { 00:15:07.872 "subsystems": [ 00:15:07.872 { 00:15:07.872 "subsystem": "bdev", 00:15:07.872 "config": [ 00:15:07.872 { 00:15:07.872 "params": { 00:15:07.872 "io_mechanism": "io_uring", 00:15:07.872 "conserve_cpu": true, 00:15:07.872 "filename": "/dev/nvme0n1", 00:15:07.872 "name": "xnvme_bdev" 00:15:07.872 }, 00:15:07.872 "method": "bdev_xnvme_create" 00:15:07.872 }, 00:15:07.872 { 00:15:07.872 "method": "bdev_wait_for_examine" 00:15:07.872 } 00:15:07.872 ] 00:15:07.872 } 00:15:07.872 ] 00:15:07.872 } 00:15:07.872 [2024-12-09 14:47:45.985123] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
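[editor's note] The rpc tests in this log keep calling a small rpc_xnvme helper (xnvme/common.sh@65-66) to pull one parameter of the created bdev out of framework_get_config. A standalone sketch of that helper; the jq filter is copied verbatim from the trace, the rpc.py path is assumed:

  rpc_xnvme() {
      ./scripts/rpc.py framework_get_config bdev |
          jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }
  rpc_xnvme conserve_cpu    # prints "true" in this pass of the loop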
00:15:07.872 [2024-12-09 14:47:45.985272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71889 ] 00:15:08.133 [2024-12-09 14:47:46.152445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.394 [2024-12-09 14:47:46.281103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.656 Running I/O for 5 seconds... 00:15:10.538 10428.00 IOPS, 40.73 MiB/s [2024-12-09T14:47:49.601Z] 10582.00 IOPS, 41.34 MiB/s [2024-12-09T14:47:50.988Z] 10611.00 IOPS, 41.45 MiB/s [2024-12-09T14:47:51.936Z] 10648.50 IOPS, 41.60 MiB/s [2024-12-09T14:47:51.936Z] 10624.00 IOPS, 41.50 MiB/s 00:15:13.814 Latency(us) 00:15:13.814 [2024-12-09T14:47:51.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.814 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:13.814 xnvme_bdev : 5.02 10599.36 41.40 0.00 0.00 6023.27 59.86 26416.05 00:15:13.814 [2024-12-09T14:47:51.936Z] =================================================================================================================== 00:15:13.814 [2024-12-09T14:47:51.936Z] Total : 10599.36 41.40 0.00 0.00 6023.27 59.86 26416.05 00:15:14.415 00:15:14.415 real 0m12.946s 00:15:14.415 user 0m8.949s 00:15:14.415 sys 0m2.988s 00:15:14.415 14:47:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.415 14:47:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.415 ************************************ 00:15:14.415 END TEST xnvme_bdevperf 00:15:14.415 ************************************ 00:15:14.415 14:47:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:14.415 14:47:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.415 14:47:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.415 14:47:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.415 ************************************ 00:15:14.415 START TEST xnvme_fio_plugin 00:15:14.415 ************************************ 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.415 14:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.415 { 00:15:14.415 "subsystems": [ 00:15:14.415 { 00:15:14.415 "subsystem": "bdev", 00:15:14.415 "config": [ 00:15:14.415 { 00:15:14.415 "params": { 00:15:14.415 "io_mechanism": "io_uring", 00:15:14.415 "conserve_cpu": true, 00:15:14.415 "filename": "/dev/nvme0n1", 00:15:14.415 "name": "xnvme_bdev" 00:15:14.415 }, 00:15:14.415 "method": "bdev_xnvme_create" 00:15:14.415 }, 00:15:14.415 { 00:15:14.415 "method": "bdev_wait_for_examine" 00:15:14.415 } 00:15:14.415 ] 00:15:14.415 } 00:15:14.415 ] 00:15:14.415 } 00:15:14.677 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.677 fio-3.35 00:15:14.677 Starting 1 thread 00:15:21.266 00:15:21.266 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72008: Mon Dec 9 14:47:58 2024 00:15:21.266 read: IOPS=39.9k, BW=156MiB/s (164MB/s)(780MiB/5002msec) 00:15:21.266 slat (usec): min=2, max=267, avg= 3.53, stdev= 2.14 00:15:21.266 clat (usec): min=730, max=3655, avg=1461.91, stdev=283.11 00:15:21.266 lat (usec): min=732, max=3691, avg=1465.44, stdev=283.59 00:15:21.266 clat percentiles (usec): 00:15:21.266 | 1.00th=[ 955], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1205], 00:15:21.266 | 30.00th=[ 1287], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1516], 00:15:21.266 | 70.00th=[ 1582], 80.00th=[ 1680], 90.00th=[ 1827], 95.00th=[ 1975], 00:15:21.266 | 99.00th=[ 2245], 99.50th=[ 2343], 99.90th=[ 2868], 99.95th=[ 3261], 00:15:21.266 | 99.99th=[ 3490] 00:15:21.266 bw ( KiB/s): min=138240, max=188416, 
per=99.44%, avg=158805.33, stdev=20885.98, samples=9 00:15:21.266 iops : min=34560, max=47104, avg=39701.33, stdev=5221.49, samples=9 00:15:21.266 lat (usec) : 750=0.01%, 1000=1.94% 00:15:21.266 lat (msec) : 2=93.91%, 4=4.14% 00:15:21.266 cpu : usr=57.07%, sys=38.81%, ctx=65, majf=0, minf=762 00:15:21.266 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:21.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.266 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:21.266 issued rwts: total=199712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:21.266 00:15:21.266 Run status group 0 (all jobs): 00:15:21.266 READ: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=780MiB (818MB), run=5002-5002msec 00:15:21.266 ----------------------------------------------------- 00:15:21.266 Suppressions used: 00:15:21.266 count bytes template 00:15:21.266 1 11 /usr/src/fio/parse.c 00:15:21.266 1 8 libtcmalloc_minimal.so 00:15:21.266 1 904 libcrypto.so 00:15:21.266 ----------------------------------------------------- 00:15:21.266 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.266 14:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.266 { 00:15:21.266 "subsystems": [ 00:15:21.266 { 00:15:21.266 "subsystem": "bdev", 00:15:21.266 "config": [ 00:15:21.266 { 00:15:21.266 "params": { 00:15:21.266 "io_mechanism": "io_uring", 00:15:21.266 "conserve_cpu": true, 00:15:21.266 "filename": "/dev/nvme0n1", 00:15:21.266 "name": "xnvme_bdev" 00:15:21.266 }, 00:15:21.266 "method": "bdev_xnvme_create" 00:15:21.266 }, 00:15:21.266 { 00:15:21.266 "method": "bdev_wait_for_examine" 00:15:21.266 } 00:15:21.267 ] 00:15:21.267 } 00:15:21.267 ] 00:15:21.267 } 00:15:21.527 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.527 fio-3.35 00:15:21.527 Starting 1 thread 00:15:28.118 00:15:28.118 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72094: Mon Dec 9 14:48:05 2024 00:15:28.118 write: IOPS=36.2k, BW=141MiB/s (148MB/s)(707MiB/5002msec); 0 zone resets 00:15:28.118 slat (nsec): min=2877, max=85007, avg=4223.57, stdev=2261.34 00:15:28.118 clat (usec): min=214, max=9207, avg=1598.22, stdev=266.63 00:15:28.118 lat (usec): min=223, max=9211, avg=1602.44, stdev=267.14 00:15:28.118 clat percentiles (usec): 00:15:28.118 | 1.00th=[ 1074], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1401], 00:15:28.118 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1631], 00:15:28.118 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1926], 95.00th=[ 2057], 00:15:28.118 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2737], 99.95th=[ 3032], 00:15:28.118 | 99.99th=[ 6915] 00:15:28.118 bw ( KiB/s): min=136704, max=165864, per=100.00%, avg=145373.33, stdev=9120.32, samples=9 00:15:28.118 iops : min=34176, max=41466, avg=36343.33, stdev=2280.08, samples=9 00:15:28.118 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.38% 00:15:28.118 lat (msec) : 2=92.99%, 4=6.60%, 10=0.02% 00:15:28.118 cpu : usr=46.31%, sys=49.29%, ctx=10, majf=0, minf=763 00:15:28.118 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:28.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.118 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:28.118 issued rwts: total=0,181017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.118 00:15:28.118 Run status group 0 (all jobs): 00:15:28.118 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=707MiB (741MB), run=5002-5002msec 00:15:28.118 ----------------------------------------------------- 00:15:28.118 Suppressions used: 00:15:28.118 count bytes template 00:15:28.118 1 11 /usr/src/fio/parse.c 00:15:28.118 1 8 libtcmalloc_minimal.so 00:15:28.118 1 904 libcrypto.so 00:15:28.118 ----------------------------------------------------- 00:15:28.118 00:15:28.118 00:15:28.118 real 0m13.659s 00:15:28.118 user 0m7.960s 00:15:28.118 sys 0m4.941s 00:15:28.118 14:48:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.118 14:48:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.118 ************************************ 00:15:28.118 END TEST xnvme_fio_plugin 00:15:28.118 ************************************ 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:28.118 14:48:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:28.118 14:48:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.118 14:48:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.118 14:48:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.118 ************************************ 00:15:28.118 START TEST xnvme_rpc 00:15:28.118 ************************************ 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72181 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72181 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72181 ']' 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.118 14:48:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.377 [2024-12-09 14:48:06.291940] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
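[editor's note] The pass starting above switches io_mechanism to io_uring_cmd, which goes through the NVMe generic character device (/dev/ng0n1) rather than the block device /dev/nvme0n1 used by plain io_uring. Only the create call changes; a hedged sketch, argument order assumed from the trace:

  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd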
00:15:28.377 [2024-12-09 14:48:06.292095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72181 ] 00:15:28.377 [2024-12-09 14:48:06.457859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.636 [2024-12-09 14:48:06.556141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.202 xnvme_bdev 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:29.202 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:29.203 
14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.203 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72181 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72181 ']' 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72181 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72181 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.461 killing process with pid 72181 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72181' 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72181 00:15:29.461 14:48:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72181 00:15:30.836 00:15:30.836 real 0m2.417s 00:15:30.836 user 0m2.530s 00:15:30.836 sys 0m0.381s 00:15:30.836 ************************************ 00:15:30.836 END TEST xnvme_rpc 00:15:30.836 ************************************ 00:15:30.836 14:48:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.836 14:48:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.836 14:48:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:30.836 14:48:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:30.836 14:48:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.836 14:48:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.836 ************************************ 00:15:30.836 START TEST xnvme_bdevperf 00:15:30.836 ************************************ 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:30.836 14:48:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.836 { 00:15:30.836 "subsystems": [ 00:15:30.836 { 00:15:30.836 "subsystem": "bdev", 00:15:30.836 "config": [ 00:15:30.836 { 00:15:30.836 "params": { 00:15:30.836 "io_mechanism": "io_uring_cmd", 00:15:30.836 "conserve_cpu": false, 00:15:30.836 "filename": "/dev/ng0n1", 00:15:30.836 "name": "xnvme_bdev" 00:15:30.836 }, 00:15:30.836 "method": "bdev_xnvme_create" 00:15:30.836 }, 00:15:30.836 { 00:15:30.836 "method": "bdev_wait_for_examine" 00:15:30.836 } 00:15:30.836 ] 00:15:30.836 } 00:15:30.836 ] 00:15:30.836 } 00:15:30.836 [2024-12-09 14:48:08.737522] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:30.836 [2024-12-09 14:48:08.737742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:15:30.836 [2024-12-09 14:48:08.892648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.094 [2024-12-09 14:48:08.976761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.094 Running I/O for 5 seconds... 00:15:33.416 56520.00 IOPS, 220.78 MiB/s [2024-12-09T14:48:12.520Z] 48502.00 IOPS, 189.46 MiB/s [2024-12-09T14:48:13.459Z] 46078.33 IOPS, 179.99 MiB/s [2024-12-09T14:48:14.396Z] 44328.25 IOPS, 173.16 MiB/s [2024-12-09T14:48:14.396Z] 47186.40 IOPS, 184.32 MiB/s 00:15:36.274 Latency(us) 00:15:36.274 [2024-12-09T14:48:14.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.274 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:36.274 xnvme_bdev : 5.00 47157.29 184.21 0.00 0.00 1353.32 187.47 10233.70 00:15:36.274 [2024-12-09T14:48:14.396Z] =================================================================================================================== 00:15:36.274 [2024-12-09T14:48:14.396Z] Total : 47157.29 184.21 0.00 0.00 1353.32 187.47 10233.70 00:15:36.842 14:48:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.842 14:48:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:36.842 14:48:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:36.842 14:48:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:36.842 14:48:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.842 { 00:15:36.842 "subsystems": [ 00:15:36.842 { 00:15:36.842 "subsystem": "bdev", 00:15:36.842 "config": [ 00:15:36.842 { 00:15:36.842 "params": { 00:15:36.843 "io_mechanism": "io_uring_cmd", 00:15:36.843 "conserve_cpu": false, 00:15:36.843 "filename": "/dev/ng0n1", 00:15:36.843 "name": "xnvme_bdev" 00:15:36.843 }, 00:15:36.843 "method": "bdev_xnvme_create" 00:15:36.843 }, 00:15:36.843 { 00:15:36.843 "method": "bdev_wait_for_examine" 00:15:36.843 } 00:15:36.843 ] 00:15:36.843 } 00:15:36.843 ] 00:15:36.843 } 00:15:36.843 [2024-12-09 14:48:14.833433] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
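Each bdevperf pass in this test is fed the same JSON bdev config over /dev/fd/62; only the -w workload flag changes between randread, randwrite, unmap and write_zeroes. A minimal by-hand reproduction of the randread pass above, as a sketch assuming this run's repo layout and /dev/ng0n1 character device:

# Sketch only: same config and flags as the harness, with the JSON in a file.
cat > /tmp/xnvme_bdev.json <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
   "conserve_cpu":false,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096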
00:15:36.843 [2024-12-09 14:48:14.833673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72330 ] 00:15:37.104 [2024-12-09 14:48:14.993861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.104 [2024-12-09 14:48:15.111091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.364 Running I/O for 5 seconds... 00:15:39.690 37131.00 IOPS, 145.04 MiB/s [2024-12-09T14:48:18.753Z] 37085.50 IOPS, 144.87 MiB/s [2024-12-09T14:48:19.698Z] 37075.33 IOPS, 144.83 MiB/s [2024-12-09T14:48:20.639Z] 34004.00 IOPS, 132.83 MiB/s [2024-12-09T14:48:20.640Z] 30749.60 IOPS, 120.12 MiB/s 00:15:42.518 Latency(us) 00:15:42.518 [2024-12-09T14:48:20.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.518 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:42.518 xnvme_bdev : 5.01 30705.74 119.94 0.00 0.00 2077.88 81.92 17845.96 00:15:42.518 [2024-12-09T14:48:20.640Z] =================================================================================================================== 00:15:42.518 [2024-12-09T14:48:20.640Z] Total : 30705.74 119.94 0.00 0.00 2077.88 81.92 17845.96 00:15:43.098 14:48:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:43.098 14:48:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:43.098 14:48:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:43.098 14:48:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:43.098 14:48:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:43.362 { 00:15:43.362 "subsystems": [ 00:15:43.362 { 00:15:43.362 "subsystem": "bdev", 00:15:43.362 "config": [ 00:15:43.362 { 00:15:43.362 "params": { 00:15:43.362 "io_mechanism": "io_uring_cmd", 00:15:43.362 "conserve_cpu": false, 00:15:43.362 "filename": "/dev/ng0n1", 00:15:43.362 "name": "xnvme_bdev" 00:15:43.362 }, 00:15:43.362 "method": "bdev_xnvme_create" 00:15:43.362 }, 00:15:43.362 { 00:15:43.362 "method": "bdev_wait_for_examine" 00:15:43.362 } 00:15:43.362 ] 00:15:43.362 } 00:15:43.362 ] 00:15:43.362 } 00:15:43.362 [2024-12-09 14:48:21.278020] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:43.362 [2024-12-09 14:48:21.278164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72400 ] 00:15:43.362 [2024-12-09 14:48:21.445483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.622 [2024-12-09 14:48:21.566303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.882 Running I/O for 5 seconds... 
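The MiB/s column in these latency tables is just the IOPS column scaled by the fixed 4096-byte IO size (-o 4096). Worked through for the randwrite total above:

  30705.74 IOPS x 4096 B = 125,770,711 B/s
  125,770,711 B/s / 1,048,576 = 119.94 MiB/s

which matches the reported 119.94 MiB/s.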
00:15:45.803 78656.00 IOPS, 307.25 MiB/s [2024-12-09T14:48:24.867Z] 78112.00 IOPS, 305.12 MiB/s [2024-12-09T14:48:26.245Z] 76416.00 IOPS, 298.50 MiB/s [2024-12-09T14:48:27.179Z] 78752.00 IOPS, 307.62 MiB/s [2024-12-09T14:48:27.179Z] 82534.40 IOPS, 322.40 MiB/s 00:15:49.057 Latency(us) 00:15:49.057 [2024-12-09T14:48:27.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.057 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:49.057 xnvme_bdev : 5.00 82492.47 322.24 0.00 0.00 772.41 545.08 2886.10 00:15:49.057 [2024-12-09T14:48:27.179Z] =================================================================================================================== 00:15:49.057 [2024-12-09T14:48:27.179Z] Total : 82492.47 322.24 0.00 0.00 772.41 545.08 2886.10 00:15:49.316 14:48:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:49.316 14:48:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:49.316 14:48:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:49.316 14:48:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:49.316 14:48:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:49.574 { 00:15:49.574 "subsystems": [ 00:15:49.574 { 00:15:49.574 "subsystem": "bdev", 00:15:49.574 "config": [ 00:15:49.574 { 00:15:49.574 "params": { 00:15:49.574 "io_mechanism": "io_uring_cmd", 00:15:49.574 "conserve_cpu": false, 00:15:49.574 "filename": "/dev/ng0n1", 00:15:49.574 "name": "xnvme_bdev" 00:15:49.574 }, 00:15:49.574 "method": "bdev_xnvme_create" 00:15:49.574 }, 00:15:49.574 { 00:15:49.574 "method": "bdev_wait_for_examine" 00:15:49.574 } 00:15:49.574 ] 00:15:49.574 } 00:15:49.574 ] 00:15:49.574 } 00:15:49.574 [2024-12-09 14:48:27.483019] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:15:49.574 [2024-12-09 14:48:27.483276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:15:49.574 [2024-12-09 14:48:27.640295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.832 [2024-12-09 14:48:27.722276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.832 Running I/O for 5 seconds... 
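At a fixed queue depth of 64, IOPS and average latency are tied by Little's law (outstanding IOs = IOPS x mean latency). Worked through for the unmap total above:

  64 / 82492.47 IOPS = 775.8 us implied average latency

close to the reported 772.41 us; the small difference is consistent with the queue briefly running below full depth during ramp-up and teardown.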
00:15:52.149 731.00 IOPS, 2.86 MiB/s [2024-12-09T14:48:31.206Z] 1907.00 IOPS, 7.45 MiB/s [2024-12-09T14:48:32.140Z] 1407.67 IOPS, 5.50 MiB/s [2024-12-09T14:48:33.074Z] 1199.00 IOPS, 4.68 MiB/s [2024-12-09T14:48:33.074Z] 1085.20 IOPS, 4.24 MiB/s 00:15:54.952 Latency(us) 00:15:54.952 [2024-12-09T14:48:33.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.952 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:54.952 xnvme_bdev : 5.11 1075.32 4.20 0.00 0.00 58954.05 85.46 796917.76 00:15:54.952 [2024-12-09T14:48:33.074Z] =================================================================================================================== 00:15:54.952 [2024-12-09T14:48:33.074Z] Total : 1075.32 4.20 0.00 0.00 58954.05 85.46 796917.76 00:15:55.521 00:15:55.521 real 0m24.920s 00:15:55.521 user 0m13.927s 00:15:55.521 sys 0m10.550s 00:15:55.521 ************************************ 00:15:55.521 END TEST xnvme_bdevperf 00:15:55.521 ************************************ 00:15:55.521 14:48:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.521 14:48:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:55.521 14:48:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:55.521 14:48:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:55.521 14:48:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.521 14:48:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.521 ************************************ 00:15:55.521 START TEST xnvme_fio_plugin 00:15:55.521 ************************************ 00:15:55.521 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:55.521 14:48:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:55.521 14:48:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:55.521 14:48:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:55.782 14:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:55.782 { 00:15:55.782 "subsystems": [ 00:15:55.782 { 00:15:55.782 "subsystem": "bdev", 00:15:55.782 "config": [ 00:15:55.782 { 00:15:55.782 "params": { 00:15:55.782 "io_mechanism": "io_uring_cmd", 00:15:55.782 "conserve_cpu": false, 00:15:55.782 "filename": "/dev/ng0n1", 00:15:55.782 "name": "xnvme_bdev" 00:15:55.782 }, 00:15:55.782 "method": "bdev_xnvme_create" 00:15:55.782 }, 00:15:55.782 { 00:15:55.782 "method": "bdev_wait_for_examine" 00:15:55.782 } 00:15:55.782 ] 00:15:55.782 } 00:15:55.782 ] 00:15:55.782 } 00:15:55.782 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:55.782 fio-3.35 00:15:55.782 Starting 1 thread 00:16:02.370 00:16:02.370 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72587: Mon Dec 9 14:48:39 2024 00:16:02.370 read: IOPS=37.1k, BW=145MiB/s (152MB/s)(726MiB/5002msec) 00:16:02.370 slat (usec): min=2, max=176, avg= 3.87, stdev= 2.18 00:16:02.370 clat (usec): min=686, max=3231, avg=1567.36, stdev=309.22 00:16:02.370 lat (usec): min=689, max=3259, avg=1571.24, stdev=309.59 00:16:02.370 clat percentiles (usec): 00:16:02.370 | 1.00th=[ 840], 5.00th=[ 1057], 10.00th=[ 1205], 20.00th=[ 1336], 00:16:02.370 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1631], 00:16:02.370 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2089], 00:16:02.370 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2704], 99.95th=[ 2868], 00:16:02.370 | 99.99th=[ 3032] 00:16:02.370 bw ( KiB/s): min=142336, max=160256, per=100.00%, avg=149219.56, stdev=6374.92, samples=9 00:16:02.370 iops : min=35584, max=40064, avg=37304.89, stdev=1593.73, samples=9 00:16:02.370 lat (usec) : 750=0.10%, 1000=3.92% 00:16:02.370 lat (msec) : 2=87.92%, 4=8.05% 00:16:02.370 cpu : usr=35.09%, sys=63.55%, ctx=13, majf=0, minf=762 00:16:02.370 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:02.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.370 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:16:02.370 issued rwts: total=185728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:02.370 00:16:02.370 Run status group 0 (all jobs): 00:16:02.370 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=726MiB (761MB), run=5002-5002msec 00:16:02.370 ----------------------------------------------------- 00:16:02.370 Suppressions used: 00:16:02.370 count bytes template 00:16:02.370 1 11 /usr/src/fio/parse.c 00:16:02.370 1 8 libtcmalloc_minimal.so 00:16:02.370 1 904 libcrypto.so 00:16:02.370 ----------------------------------------------------- 00:16:02.370 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.370 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.631 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.631 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.631 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:02.631 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.631 14:48:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.631 { 00:16:02.631 "subsystems": [ 00:16:02.631 { 00:16:02.631 "subsystem": "bdev", 00:16:02.631 "config": [ 00:16:02.631 { 00:16:02.631 "params": { 00:16:02.631 "io_mechanism": "io_uring_cmd", 00:16:02.631 "conserve_cpu": false, 00:16:02.631 "filename": "/dev/ng0n1", 00:16:02.631 "name": "xnvme_bdev" 00:16:02.631 }, 00:16:02.631 "method": "bdev_xnvme_create" 00:16:02.631 }, 00:16:02.631 { 00:16:02.631 "method": "bdev_wait_for_examine" 00:16:02.631 } 00:16:02.631 ] 00:16:02.631 } 00:16:02.631 ] 00:16:02.631 } 00:16:02.631 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:02.631 fio-3.35 00:16:02.631 Starting 1 thread 00:16:09.220 00:16:09.220 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72683: Mon Dec 9 14:48:46 2024 00:16:09.220 write: IOPS=37.0k, BW=145MiB/s (152MB/s)(724MiB/5001msec); 0 zone resets 00:16:09.220 slat (usec): min=2, max=120, avg= 3.98, stdev= 2.24 00:16:09.220 clat (usec): min=155, max=5238, avg=1566.50, stdev=276.33 00:16:09.220 lat (usec): min=158, max=5241, avg=1570.48, stdev=276.74 00:16:09.220 clat percentiles (usec): 00:16:09.220 | 1.00th=[ 1045], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1336], 00:16:09.220 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:16:09.220 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2057], 00:16:09.220 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2933], 99.95th=[ 3195], 00:16:09.220 | 99.99th=[ 4228] 00:16:09.220 bw ( KiB/s): min=140272, max=164072, per=100.00%, avg=149141.56, stdev=8167.67, samples=9 00:16:09.220 iops : min=35068, max=41018, avg=37285.33, stdev=2041.95, samples=9 00:16:09.220 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.35% 00:16:09.220 lat (msec) : 2=93.00%, 4=6.60%, 10=0.01% 00:16:09.220 cpu : usr=37.36%, sys=61.20%, ctx=13, majf=0, minf=763 00:16:09.220 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:16:09.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.220 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:09.220 issued rwts: total=0,185257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:09.220 00:16:09.220 Run status group 0 (all jobs): 00:16:09.220 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=724MiB (759MB), run=5001-5001msec 00:16:09.480 ----------------------------------------------------- 00:16:09.480 Suppressions used: 00:16:09.480 count bytes template 00:16:09.480 1 11 /usr/src/fio/parse.c 00:16:09.480 1 8 libtcmalloc_minimal.so 00:16:09.480 1 904 libcrypto.so 00:16:09.481 ----------------------------------------------------- 00:16:09.481 00:16:09.481 00:16:09.481 real 0m13.870s 00:16:09.481 user 0m6.530s 00:16:09.481 sys 0m6.877s 00:16:09.481 14:48:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.481 14:48:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 ************************************ 00:16:09.481 END TEST xnvme_fio_plugin 00:16:09.481 ************************************ 00:16:09.481 14:48:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:09.481 14:48:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:09.481 14:48:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:16:09.481 14:48:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:09.481 14:48:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.481 14:48:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.481 14:48:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 ************************************ 00:16:09.481 START TEST xnvme_rpc 00:16:09.481 ************************************ 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:09.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72763 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72763 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72763 ']' 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 14:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.742 [2024-12-09 14:48:47.680590] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
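The rpc_xnvme checks that follow boil down to dumping the bdev subsystem config and pulling one parameter of the bdev_xnvme_create call out with jq. A by-hand sketch, assuming the standard scripts/rpc.py client shipped in this repo (the harness's rpc_cmd forwards its arguments to it):

# Sketch only: create the bdev with conserve_cpu enabled (-c), then read it back.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# prints: true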
00:16:09.742 [2024-12-09 14:48:47.681062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72763 ] 00:16:09.742 [2024-12-09 14:48:47.846941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.001 [2024-12-09 14:48:47.995253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 xnvme_bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72763 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72763 ']' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72763 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72763 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.945 killing process with pid 72763 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72763' 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72763 00:16:10.945 14:48:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72763 00:16:12.862 ************************************ 00:16:12.862 END TEST xnvme_rpc 00:16:12.862 ************************************ 00:16:12.862 00:16:12.862 real 0m3.085s 00:16:12.862 user 0m2.991s 00:16:12.862 sys 0m0.589s 00:16:12.862 14:48:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.862 14:48:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.862 14:48:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:12.862 14:48:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.862 14:48:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.862 14:48:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.862 ************************************ 00:16:12.862 START TEST xnvme_bdevperf 00:16:12.862 ************************************ 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:12.862 14:48:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:12.862 { 00:16:12.862 "subsystems": [ 00:16:12.862 { 00:16:12.862 "subsystem": "bdev", 00:16:12.862 "config": [ 00:16:12.862 { 00:16:12.862 "params": { 00:16:12.862 "io_mechanism": "io_uring_cmd", 00:16:12.862 "conserve_cpu": true, 00:16:12.862 "filename": "/dev/ng0n1", 00:16:12.862 "name": "xnvme_bdev" 00:16:12.862 }, 00:16:12.862 "method": "bdev_xnvme_create" 00:16:12.862 }, 00:16:12.862 { 00:16:12.862 "method": "bdev_wait_for_examine" 00:16:12.862 } 00:16:12.862 ] 00:16:12.862 } 00:16:12.862 ] 00:16:12.862 } 00:16:12.862 [2024-12-09 14:48:50.813431] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:16:12.862 [2024-12-09 14:48:50.813970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:16:12.862 [2024-12-09 14:48:50.977699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.123 [2024-12-09 14:48:51.099035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.385 Running I/O for 5 seconds... 00:16:15.505 35712.00 IOPS, 139.50 MiB/s [2024-12-09T14:48:54.569Z] 35680.00 IOPS, 139.38 MiB/s [2024-12-09T14:48:55.513Z] 36053.33 IOPS, 140.83 MiB/s [2024-12-09T14:48:56.455Z] 36144.00 IOPS, 141.19 MiB/s 00:16:18.333 Latency(us) 00:16:18.333 [2024-12-09T14:48:56.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.333 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:18.333 xnvme_bdev : 5.00 36255.08 141.62 0.00 0.00 1760.89 932.63 4234.63 00:16:18.333 [2024-12-09T14:48:56.455Z] =================================================================================================================== 00:16:18.333 [2024-12-09T14:48:56.455Z] Total : 36255.08 141.62 0.00 0.00 1760.89 932.63 4234.63 00:16:19.276 14:48:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:19.276 14:48:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:19.276 14:48:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:19.276 14:48:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:19.276 14:48:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:19.276 { 00:16:19.276 "subsystems": [ 00:16:19.276 { 00:16:19.276 "subsystem": "bdev", 00:16:19.276 "config": [ 00:16:19.276 { 00:16:19.276 "params": { 00:16:19.276 "io_mechanism": "io_uring_cmd", 00:16:19.276 "conserve_cpu": true, 00:16:19.276 "filename": "/dev/ng0n1", 00:16:19.276 "name": "xnvme_bdev" 00:16:19.276 }, 00:16:19.276 "method": "bdev_xnvme_create" 00:16:19.276 }, 00:16:19.276 { 00:16:19.276 "method": "bdev_wait_for_examine" 00:16:19.276 } 00:16:19.276 ] 00:16:19.276 } 00:16:19.276 ] 00:16:19.276 } 00:16:19.276 [2024-12-09 14:48:57.270077] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:16:19.276 [2024-12-09 14:48:57.270548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72911 ] 00:16:19.537 [2024-12-09 14:48:57.437224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.537 [2024-12-09 14:48:57.558111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.798 Running I/O for 5 seconds... 00:16:22.128 43070.00 IOPS, 168.24 MiB/s [2024-12-09T14:49:01.196Z] 37257.00 IOPS, 145.54 MiB/s [2024-12-09T14:49:02.143Z] 34386.00 IOPS, 134.32 MiB/s [2024-12-09T14:49:03.088Z] 32715.75 IOPS, 127.80 MiB/s [2024-12-09T14:49:03.088Z] 31709.40 IOPS, 123.86 MiB/s 00:16:24.966 Latency(us) 00:16:24.966 [2024-12-09T14:49:03.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.966 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:24.966 xnvme_bdev : 5.00 31708.45 123.86 0.00 0.00 2014.18 66.17 20064.10 00:16:24.966 [2024-12-09T14:49:03.088Z] =================================================================================================================== 00:16:24.966 [2024-12-09T14:49:03.088Z] Total : 31708.45 123.86 0.00 0.00 2014.18 66.17 20064.10 00:16:25.539 14:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:25.539 14:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:25.539 14:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:25.539 14:49:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:25.539 14:49:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:25.801 { 00:16:25.801 "subsystems": [ 00:16:25.801 { 00:16:25.801 "subsystem": "bdev", 00:16:25.801 "config": [ 00:16:25.801 { 00:16:25.801 "params": { 00:16:25.801 "io_mechanism": "io_uring_cmd", 00:16:25.801 "conserve_cpu": true, 00:16:25.801 "filename": "/dev/ng0n1", 00:16:25.801 "name": "xnvme_bdev" 00:16:25.801 }, 00:16:25.801 "method": "bdev_xnvme_create" 00:16:25.801 }, 00:16:25.801 { 00:16:25.801 "method": "bdev_wait_for_examine" 00:16:25.801 } 00:16:25.801 ] 00:16:25.801 } 00:16:25.801 ] 00:16:25.801 } 00:16:25.801 [2024-12-09 14:49:03.729050] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:16:25.801 [2024-12-09 14:49:03.729945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72991 ] 00:16:25.801 [2024-12-09 14:49:03.905121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.063 [2024-12-09 14:49:04.027083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.325 Running I/O for 5 seconds... 
00:16:28.212 79040.00 IOPS, 308.75 MiB/s [2024-12-09T14:49:07.719Z] 79072.00 IOPS, 308.88 MiB/s [2024-12-09T14:49:08.654Z] 79040.00 IOPS, 308.75 MiB/s [2024-12-09T14:49:09.589Z] 82704.00 IOPS, 323.06 MiB/s [2024-12-09T14:49:09.589Z] 85337.60 IOPS, 333.35 MiB/s 00:16:31.467 Latency(us) 00:16:31.467 [2024-12-09T14:49:09.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.467 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:31.467 xnvme_bdev : 5.00 85299.05 333.20 0.00 0.00 746.89 393.85 2911.31 00:16:31.467 [2024-12-09T14:49:09.589Z] =================================================================================================================== 00:16:31.467 [2024-12-09T14:49:09.589Z] Total : 85299.05 333.20 0.00 0.00 746.89 393.85 2911.31 00:16:32.035 14:49:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:32.035 14:49:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:32.035 14:49:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:32.035 14:49:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:32.035 14:49:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:32.035 { 00:16:32.035 "subsystems": [ 00:16:32.035 { 00:16:32.035 "subsystem": "bdev", 00:16:32.035 "config": [ 00:16:32.035 { 00:16:32.035 "params": { 00:16:32.035 "io_mechanism": "io_uring_cmd", 00:16:32.035 "conserve_cpu": true, 00:16:32.035 "filename": "/dev/ng0n1", 00:16:32.035 "name": "xnvme_bdev" 00:16:32.035 }, 00:16:32.035 "method": "bdev_xnvme_create" 00:16:32.035 }, 00:16:32.035 { 00:16:32.035 "method": "bdev_wait_for_examine" 00:16:32.035 } 00:16:32.035 ] 00:16:32.035 } 00:16:32.035 ] 00:16:32.035 } 00:16:32.035 [2024-12-09 14:49:09.937545] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:16:32.035 [2024-12-09 14:49:09.937665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:16:32.035 [2024-12-09 14:49:10.093877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.293 [2024-12-09 14:49:10.178089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.293 Running I/O for 5 seconds... 
00:16:34.614 52828.00 IOPS, 206.36 MiB/s [2024-12-09T14:49:13.676Z] 49720.50 IOPS, 194.22 MiB/s [2024-12-09T14:49:14.616Z] 46839.67 IOPS, 182.97 MiB/s [2024-12-09T14:49:15.555Z] 44576.50 IOPS, 174.13 MiB/s [2024-12-09T14:49:15.555Z] 42730.00 IOPS, 166.91 MiB/s 00:16:37.433 Latency(us) 00:16:37.433 [2024-12-09T14:49:15.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.433 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:37.433 xnvme_bdev : 5.00 42703.71 166.81 0.00 0.00 1493.32 165.42 18249.26 00:16:37.433 [2024-12-09T14:49:15.555Z] =================================================================================================================== 00:16:37.433 [2024-12-09T14:49:15.555Z] Total : 42703.71 166.81 0.00 0.00 1493.32 165.42 18249.26 00:16:38.379 00:16:38.379 real 0m25.415s 00:16:38.379 user 0m16.120s 00:16:38.379 sys 0m6.686s 00:16:38.379 14:49:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.379 14:49:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.379 ************************************ 00:16:38.379 END TEST xnvme_bdevperf 00:16:38.379 ************************************ 00:16:38.379 14:49:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:38.379 14:49:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.379 14:49:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.379 14:49:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.379 ************************************ 00:16:38.379 START TEST xnvme_fio_plugin 00:16:38.379 ************************************ 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:38.379 14:49:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:38.379 { 00:16:38.379 "subsystems": [ 00:16:38.379 { 00:16:38.379 "subsystem": "bdev", 00:16:38.379 "config": [ 00:16:38.379 { 00:16:38.379 "params": { 00:16:38.379 "io_mechanism": "io_uring_cmd", 00:16:38.379 "conserve_cpu": true, 00:16:38.379 "filename": "/dev/ng0n1", 00:16:38.379 "name": "xnvme_bdev" 00:16:38.379 }, 00:16:38.379 "method": "bdev_xnvme_create" 00:16:38.379 }, 00:16:38.379 { 00:16:38.379 "method": "bdev_wait_for_examine" 00:16:38.379 } 00:16:38.379 ] 00:16:38.379 } 00:16:38.379 ] 00:16:38.379 } 00:16:38.379 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:38.379 fio-3.35 00:16:38.379 Starting 1 thread 00:16:45.056 00:16:45.056 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73178: Mon Dec 9 14:49:22 2024 00:16:45.056 read: IOPS=37.9k, BW=148MiB/s (155MB/s)(740MiB/5002msec) 00:16:45.056 slat (usec): min=2, max=317, avg= 3.74, stdev= 2.53 00:16:45.056 clat (usec): min=908, max=3253, avg=1538.96, stdev=262.39 00:16:45.056 lat (usec): min=911, max=3288, avg=1542.69, stdev=262.87 00:16:45.056 clat percentiles (usec): 00:16:45.056 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1319], 00:16:45.056 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1500], 60.00th=[ 1565], 00:16:45.056 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2008], 00:16:45.056 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2737], 99.95th=[ 2802], 00:16:45.056 | 99.99th=[ 2999] 00:16:45.056 bw ( KiB/s): min=139264, max=164864, per=99.11%, avg=150215.11, stdev=7800.42, samples=9 00:16:45.056 iops : min=34816, max=41216, avg=37553.78, stdev=1950.10, samples=9 00:16:45.056 lat (usec) : 1000=0.09% 00:16:45.056 lat (msec) : 2=94.49%, 4=5.42% 00:16:45.056 cpu : usr=53.95%, sys=42.41%, ctx=44, majf=0, minf=762 00:16:45.056 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:45.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.056 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
00:16:45.056 issued rwts: total=189536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.056 00:16:45.056 Run status group 0 (all jobs): 00:16:45.056 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=740MiB (776MB), run=5002-5002msec 00:16:45.056 ----------------------------------------------------- 00:16:45.056 Suppressions used: 00:16:45.056 count bytes template 00:16:45.056 1 11 /usr/src/fio/parse.c 00:16:45.056 1 8 libtcmalloc_minimal.so 00:16:45.056 1 904 libcrypto.so 00:16:45.056 ----------------------------------------------------- 00:16:45.056 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:45.056 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:45.317 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:45.317 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:45.317 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:45.317 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:45.318 14:49:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:45.318 { 00:16:45.318 "subsystems": [ 00:16:45.318 { 00:16:45.318 "subsystem": "bdev", 00:16:45.318 "config": [ 00:16:45.318 { 00:16:45.318 "params": { 00:16:45.318 "io_mechanism": "io_uring_cmd", 00:16:45.318 "conserve_cpu": true, 00:16:45.318 "filename": "/dev/ng0n1", 00:16:45.318 "name": "xnvme_bdev" 00:16:45.318 }, 00:16:45.318 "method": "bdev_xnvme_create" 00:16:45.318 }, 00:16:45.318 { 00:16:45.318 "method": "bdev_wait_for_examine" 00:16:45.318 } 00:16:45.318 ] 00:16:45.318 } 00:16:45.318 ] 00:16:45.318 } 00:16:45.318 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:45.318 fio-3.35 00:16:45.318 Starting 1 thread 00:16:51.905 00:16:51.905 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73263: Mon Dec 9 14:49:29 2024 00:16:51.905 write: IOPS=39.5k, BW=154MiB/s (162MB/s)(771MiB/5001msec); 0 zone resets 00:16:51.905 slat (usec): min=2, max=228, avg= 4.00, stdev= 2.38 00:16:51.905 clat (usec): min=558, max=5541, avg=1461.64, stdev=265.96 00:16:51.905 lat (usec): min=561, max=5548, avg=1465.64, stdev=266.51 00:16:51.905 clat percentiles (usec): 00:16:51.905 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:16:51.905 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1434], 60.00th=[ 1500], 00:16:51.905 | 70.00th=[ 1565], 80.00th=[ 1663], 90.00th=[ 1795], 95.00th=[ 1926], 00:16:51.905 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 3064], 99.95th=[ 3326], 00:16:51.905 | 99.99th=[ 4113] 00:16:51.905 bw ( KiB/s): min=149040, max=171184, per=99.40%, avg=156967.22, stdev=8321.51, samples=9 00:16:51.905 iops : min=37260, max=42796, avg=39241.78, stdev=2080.38, samples=9 00:16:51.905 lat (usec) : 750=0.01%, 1000=0.60% 00:16:51.905 lat (msec) : 2=95.93%, 4=3.46%, 10=0.02% 00:16:51.905 cpu : usr=54.27%, sys=40.29%, ctx=12, majf=0, minf=763 00:16:51.905 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.5%, >=64=1.7% 00:16:51.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.905 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:51.905 issued rwts: total=0,197435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.905 00:16:51.905 Run status group 0 (all jobs): 00:16:51.905 WRITE: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=771MiB (809MB), run=5001-5001msec 00:16:52.165 ----------------------------------------------------- 00:16:52.165 Suppressions used: 00:16:52.165 count bytes template 00:16:52.165 1 11 /usr/src/fio/parse.c 00:16:52.165 1 8 libtcmalloc_minimal.so 00:16:52.165 1 904 libcrypto.so 00:16:52.165 ----------------------------------------------------- 00:16:52.165 00:16:52.165 00:16:52.165 real 0m13.844s 00:16:52.165 user 0m8.283s 00:16:52.165 sys 0m4.793s 00:16:52.165 14:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.165 ************************************ 00:16:52.165 END TEST xnvme_fio_plugin 00:16:52.165 ************************************ 00:16:52.165 14:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 Process with pid 72763 is not found 00:16:52.165 14:49:30 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72763 00:16:52.165 14:49:30 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72763 ']' 00:16:52.165 14:49:30 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72763 
00:16:52.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72763) - No such process 00:16:52.165 14:49:30 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72763 is not found' 00:16:52.165 00:16:52.165 real 3m32.402s 00:16:52.165 user 1m59.232s 00:16:52.165 sys 1m17.383s 00:16:52.165 ************************************ 00:16:52.165 END TEST nvme_xnvme 00:16:52.165 ************************************ 00:16:52.165 14:49:30 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.165 14:49:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 14:49:30 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:52.165 14:49:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.165 14:49:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.165 14:49:30 -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 ************************************ 00:16:52.165 START TEST blockdev_xnvme 00:16:52.165 ************************************ 00:16:52.165 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:52.165 * Looking for test storage... 00:16:52.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:52.165 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:52.165 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:52.165 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:52.426 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.426 14:49:30 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:52.426 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.427 --rc genhtml_branch_coverage=1 00:16:52.427 --rc genhtml_function_coverage=1 00:16:52.427 --rc genhtml_legend=1 00:16:52.427 --rc geninfo_all_blocks=1 00:16:52.427 --rc geninfo_unexecuted_blocks=1 00:16:52.427 00:16:52.427 ' 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.427 --rc genhtml_branch_coverage=1 00:16:52.427 --rc genhtml_function_coverage=1 00:16:52.427 --rc genhtml_legend=1 00:16:52.427 --rc geninfo_all_blocks=1 00:16:52.427 --rc geninfo_unexecuted_blocks=1 00:16:52.427 00:16:52.427 ' 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.427 --rc genhtml_branch_coverage=1 00:16:52.427 --rc genhtml_function_coverage=1 00:16:52.427 --rc genhtml_legend=1 00:16:52.427 --rc geninfo_all_blocks=1 00:16:52.427 --rc geninfo_unexecuted_blocks=1 00:16:52.427 00:16:52.427 ' 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.427 --rc genhtml_branch_coverage=1 00:16:52.427 --rc genhtml_function_coverage=1 00:16:52.427 --rc genhtml_legend=1 00:16:52.427 --rc geninfo_all_blocks=1 00:16:52.427 --rc geninfo_unexecuted_blocks=1 00:16:52.427 00:16:52.427 ' 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73403 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:52.427 14:49:30 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73403 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73403 ']' 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.427 14:49:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.427 [2024-12-09 14:49:30.438602] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
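[Note: start_spdk_tgt above launches the target, records its pid for the cleanup trap, and waitforlisten blocks until the RPC socket answers. The pattern, condensed into a sketch (paths assume an SPDK checkout; the real helper retries with a bounded max_retries rather than looping forever):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll the default UNIX-domain RPC socket until the target responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
]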
00:16:52.427 [2024-12-09 14:49:30.439000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73403 ] 00:16:52.687 [2024-12-09 14:49:30.602517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.687 [2024-12-09 14:49:30.721372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.628 14:49:31 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.628 14:49:31 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:53.628 14:49:31 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:53.628 14:49:31 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:53.628 14:49:31 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:53.628 14:49:31 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:53.628 14:49:31 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:53.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.458 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:54.458 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:54.458 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:54.458 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:54.458 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:16:54.458 14:49:32 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:54.458 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.459 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.459 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:54.459 nvme0n1 00:16:54.459 nvme0n2 00:16:54.459 nvme0n3 00:16:54.459 nvme1n1 00:16:54.719 nvme2n1 00:16:54.719 nvme3n1 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 
14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 14:49:32 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:54.719 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:54.720 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4d7da76b-33c3-4fed-ab21-5317410cbb12"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4d7da76b-33c3-4fed-ab21-5317410cbb12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "624c33c9-0fa9-4aab-bf41-ea688c44cb6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "624c33c9-0fa9-4aab-bf41-ea688c44cb6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "61c57f11-f087-4c4e-a231-cac0e7ac63b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61c57f11-f087-4c4e-a231-cac0e7ac63b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8bb1a770-5ccf-478a-bb20-93658a0f57bb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8bb1a770-5ccf-478a-bb20-93658a0f57bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a7520b81-da6b-45a0-8b97-62ed6bc7469f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a7520b81-da6b-45a0-8b97-62ed6bc7469f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fe89d82b-c27b-4d0f-bab4-648cddf0a9cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fe89d82b-c27b-4d0f-bab4-648cddf0a9cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:54.720 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:54.720 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:54.720 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:54.720 14:49:32 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73403 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73403 ']' 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73403 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73403 00:16:54.720 killing process with pid 73403 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73403' 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73403 00:16:54.720 14:49:32 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73403 00:16:56.632 14:49:34 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:56.632 14:49:34 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:56.632 14:49:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:56.632 14:49:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.632 14:49:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.632 ************************************ 00:16:56.632 START TEST bdev_hello_world 00:16:56.632 ************************************ 00:16:56.632 14:49:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:56.632 [2024-12-09 14:49:34.516070] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:16:56.632 [2024-12-09 14:49:34.516218] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73682 ] 00:16:56.632 [2024-12-09 14:49:34.679078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.893 [2024-12-09 14:49:34.803409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.154 [2024-12-09 14:49:35.209723] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:57.154 [2024-12-09 14:49:35.210028] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:57.154 [2024-12-09 14:49:35.210060] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:57.154 [2024-12-09 14:49:35.212268] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:57.154 [2024-12-09 14:49:35.213413] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:57.154 [2024-12-09 14:49:35.213483] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:57.154 [2024-12-09 14:49:35.214112] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
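[Note: the NOTICE lines above come from SPDK's hello_bdev example completing a write/read round trip on nvme0n1 — open the bdev, write "Hello World!", read it back. It can be run standalone against the same JSON config (a sketch using relative paths in place of the absolute ones in the trace):

    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
]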
00:16:57.154 00:16:57.154 [2024-12-09 14:49:35.214184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:58.096 00:16:58.096 real 0m1.568s 00:16:58.096 user 0m1.209s 00:16:58.096 sys 0m0.211s 00:16:58.096 14:49:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.096 ************************************ 00:16:58.096 END TEST bdev_hello_world 00:16:58.096 ************************************ 00:16:58.096 14:49:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:58.096 14:49:36 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:58.096 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.096 14:49:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.096 14:49:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.096 ************************************ 00:16:58.096 START TEST bdev_bounds 00:16:58.096 ************************************ 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:58.096 Process bdevio pid: 73718 00:16:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73718 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73718' 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73718 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73718 ']' 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.096 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.097 14:49:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:58.097 [2024-12-09 14:49:36.162274] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
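[Note: bdev_bounds drives the bdevio tester launched above. Started with -w it initializes and then waits, and the CUnit suites are fired over RPC by the companion script, as the perform_tests call further down shows. Condensed sketch — the harness substitutes waitforlisten for the fixed sleep, and '-s 0' is copied from the trace (PRE_RESERVED_MEM=0):

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # the real harness polls /var/tmp/spdk.sock instead
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"
]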
00:16:58.097 [2024-12-09 14:49:36.162639] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73718 ] 00:16:58.357 [2024-12-09 14:49:36.327685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.357 [2024-12-09 14:49:36.457076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.357 [2024-12-09 14:49:36.457577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.357 [2024-12-09 14:49:36.457459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.927 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.928 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:58.928 14:49:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:59.189 I/O targets: 00:16:59.189 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.189 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.189 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:59.189 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:59.189 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:59.189 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:59.189 00:16:59.189 00:16:59.189 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.189 http://cunit.sourceforge.net/ 00:16:59.189 00:16:59.189 00:16:59.189 Suite: bdevio tests on: nvme3n1 00:16:59.189 Test: blockdev write read block ...passed 00:16:59.189 Test: blockdev write zeroes read block ...passed 00:16:59.189 Test: blockdev write zeroes read no split ...passed 00:16:59.189 Test: blockdev write zeroes read split ...passed 00:16:59.189 Test: blockdev write zeroes read split partial ...passed 00:16:59.189 Test: blockdev reset ...passed 00:16:59.189 Test: blockdev write read 8 blocks ...passed 00:16:59.189 Test: blockdev write read size > 128k ...passed 00:16:59.189 Test: blockdev write read invalid size ...passed 00:16:59.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.189 Test: blockdev write read max offset ...passed 00:16:59.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.189 Test: blockdev writev readv 8 blocks ...passed 00:16:59.189 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.189 Test: blockdev writev readv block ...passed 00:16:59.189 Test: blockdev writev readv size > 128k ...passed 00:16:59.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.189 Test: blockdev comparev and writev ...passed 00:16:59.189 Test: blockdev nvme passthru rw ...passed 00:16:59.189 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.189 Test: blockdev nvme admin passthru ...passed 00:16:59.189 Test: blockdev copy ...passed 00:16:59.189 Suite: bdevio tests on: nvme2n1 00:16:59.189 Test: blockdev write read block ...passed 00:16:59.189 Test: blockdev write zeroes read block ...passed 00:16:59.189 Test: blockdev write zeroes read no split ...passed 00:16:59.189 Test: blockdev write zeroes read split ...passed 00:16:59.189 Test: blockdev write zeroes read split partial ...passed 00:16:59.189 Test: blockdev reset ...passed 
00:16:59.189 Test: blockdev write read 8 blocks ...passed 00:16:59.189 Test: blockdev write read size > 128k ...passed 00:16:59.189 Test: blockdev write read invalid size ...passed 00:16:59.189 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.189 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.189 Test: blockdev write read max offset ...passed 00:16:59.189 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.189 Test: blockdev writev readv 8 blocks ...passed 00:16:59.189 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.189 Test: blockdev writev readv block ...passed 00:16:59.189 Test: blockdev writev readv size > 128k ...passed 00:16:59.189 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.189 Test: blockdev comparev and writev ...passed 00:16:59.189 Test: blockdev nvme passthru rw ...passed 00:16:59.189 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.189 Test: blockdev nvme admin passthru ...passed 00:16:59.189 Test: blockdev copy ...passed 00:16:59.189 Suite: bdevio tests on: nvme1n1 00:16:59.189 Test: blockdev write read block ...passed 00:16:59.450 Test: blockdev write zeroes read block ...passed 00:16:59.450 Test: blockdev write zeroes read no split ...passed 00:16:59.450 Test: blockdev write zeroes read split ...passed 00:16:59.450 Test: blockdev write zeroes read split partial ...passed 00:16:59.450 Test: blockdev reset ...passed 00:16:59.450 Test: blockdev write read 8 blocks ...passed 00:16:59.450 Test: blockdev write read size > 128k ...passed 00:16:59.450 Test: blockdev write read invalid size ...passed 00:16:59.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.450 Test: blockdev write read max offset ...passed 00:16:59.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.450 Test: blockdev writev readv 8 blocks ...passed 00:16:59.450 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.450 Test: blockdev writev readv block ...passed 00:16:59.450 Test: blockdev writev readv size > 128k ...passed 00:16:59.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.450 Test: blockdev comparev and writev ...passed 00:16:59.450 Test: blockdev nvme passthru rw ...passed 00:16:59.450 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.450 Test: blockdev nvme admin passthru ...passed 00:16:59.450 Test: blockdev copy ...passed 00:16:59.450 Suite: bdevio tests on: nvme0n3 00:16:59.450 Test: blockdev write read block ...passed 00:16:59.450 Test: blockdev write zeroes read block ...passed 00:16:59.450 Test: blockdev write zeroes read no split ...passed 00:16:59.450 Test: blockdev write zeroes read split ...passed 00:16:59.450 Test: blockdev write zeroes read split partial ...passed 00:16:59.450 Test: blockdev reset ...passed 00:16:59.450 Test: blockdev write read 8 blocks ...passed 00:16:59.450 Test: blockdev write read size > 128k ...passed 00:16:59.450 Test: blockdev write read invalid size ...passed 00:16:59.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.450 Test: blockdev write read max offset ...passed 00:16:59.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.450 Test: blockdev writev readv 8 blocks 
...passed 00:16:59.450 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.450 Test: blockdev writev readv block ...passed 00:16:59.450 Test: blockdev writev readv size > 128k ...passed 00:16:59.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.450 Test: blockdev comparev and writev ...passed 00:16:59.450 Test: blockdev nvme passthru rw ...passed 00:16:59.450 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.450 Test: blockdev nvme admin passthru ...passed 00:16:59.450 Test: blockdev copy ...passed 00:16:59.450 Suite: bdevio tests on: nvme0n2 00:16:59.450 Test: blockdev write read block ...passed 00:16:59.450 Test: blockdev write zeroes read block ...passed 00:16:59.450 Test: blockdev write zeroes read no split ...passed 00:16:59.450 Test: blockdev write zeroes read split ...passed 00:16:59.450 Test: blockdev write zeroes read split partial ...passed 00:16:59.450 Test: blockdev reset ...passed 00:16:59.450 Test: blockdev write read 8 blocks ...passed 00:16:59.450 Test: blockdev write read size > 128k ...passed 00:16:59.450 Test: blockdev write read invalid size ...passed 00:16:59.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.450 Test: blockdev write read max offset ...passed 00:16:59.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.450 Test: blockdev writev readv 8 blocks ...passed 00:16:59.450 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.450 Test: blockdev writev readv block ...passed 00:16:59.450 Test: blockdev writev readv size > 128k ...passed 00:16:59.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.450 Test: blockdev comparev and writev ...passed 00:16:59.450 Test: blockdev nvme passthru rw ...passed 00:16:59.450 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.450 Test: blockdev nvme admin passthru ...passed 00:16:59.450 Test: blockdev copy ...passed 00:16:59.450 Suite: bdevio tests on: nvme0n1 00:16:59.450 Test: blockdev write read block ...passed 00:16:59.450 Test: blockdev write zeroes read block ...passed 00:16:59.450 Test: blockdev write zeroes read no split ...passed 00:16:59.712 Test: blockdev write zeroes read split ...passed 00:16:59.712 Test: blockdev write zeroes read split partial ...passed 00:16:59.712 Test: blockdev reset ...passed 00:16:59.712 Test: blockdev write read 8 blocks ...passed 00:16:59.712 Test: blockdev write read size > 128k ...passed 00:16:59.712 Test: blockdev write read invalid size ...passed 00:16:59.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:59.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:59.712 Test: blockdev write read max offset ...passed 00:16:59.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:59.712 Test: blockdev writev readv 8 blocks ...passed 00:16:59.712 Test: blockdev writev readv 30 x 1block ...passed 00:16:59.712 Test: blockdev writev readv block ...passed 00:16:59.712 Test: blockdev writev readv size > 128k ...passed 00:16:59.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:59.712 Test: blockdev comparev and writev ...passed 00:16:59.712 Test: blockdev nvme passthru rw ...passed 00:16:59.712 Test: blockdev nvme passthru vendor specific ...passed 00:16:59.712 Test: blockdev nvme admin passthru ...passed 00:16:59.712 Test: blockdev copy ...passed 
00:16:59.712 00:16:59.712 Run Summary: Type Total Ran Passed Failed Inactive 00:16:59.712 suites 6 6 n/a 0 0 00:16:59.712 tests 138 138 138 0 0 00:16:59.712 asserts 780 780 780 0 n/a 00:16:59.712 00:16:59.712 Elapsed time = 1.292 seconds 00:16:59.712 0 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73718 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73718 ']' 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73718 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73718 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73718' 00:16:59.712 killing process with pid 73718 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73718 00:16:59.712 14:49:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73718 00:17:00.656 14:49:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:00.656 00:17:00.656 real 0m2.413s 00:17:00.656 user 0m5.801s 00:17:00.656 sys 0m0.393s 00:17:00.656 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.656 14:49:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:00.656 ************************************ 00:17:00.656 END TEST bdev_bounds 00:17:00.656 ************************************ 00:17:00.656 14:49:38 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:00.656 14:49:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:00.656 14:49:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.656 14:49:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.656 ************************************ 00:17:00.656 START TEST bdev_nbd 00:17:00.656 ************************************ 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
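[Note: killprocess, used for pid 73718 above (and 73403 earlier), guards the kill with a liveness check and a process-name check so that a recycled pid or a sudo wrapper is never signalled directly. Roughly — a condensed sketch of the helper's shape, not a verbatim copy:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        # ps --no-headers -o comm= prints just the command name (e.g. reactor_0).
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        kill "$pid" && wait "$pid"                  # reap it; works because the
    }                                               # harness spawned it itself
]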
00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73780 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73780 /var/tmp/spdk-nbd.sock 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73780 ']' 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:00.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:00.656 14:49:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:00.656 [2024-12-09 14:49:38.659499] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
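[Note: once bdev_svc is up, the loop traced below exports each of the six bdevs on /dev/nbd0../dev/nbd5, waits for the kernel to publish the node in /proc/partitions, and proves it readable with a single direct-I/O block. One iteration, condensed (RPC method names are the ones used in the trace; the trace omits the device argument and lets SPDK pick the nbd node, whereas this sketch passes it explicitly):

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    until grep -qw nbd0 /proc/partitions; do sleep 0.1; done   # waitfornbd
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
]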
00:17:00.656 [2024-12-09 14:49:38.659655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.917 [2024-12-09 14:49:38.820972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.917 [2024-12-09 14:49:38.947524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.489 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.749 
1+0 records in 00:17:01.749 1+0 records out 00:17:01.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974173 s, 4.2 MB/s 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.749 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.750 14:49:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.010 1+0 records in 00:17:02.010 1+0 records out 00:17:02.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000884362 s, 4.6 MB/s 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.010 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:02.271 14:49:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.271 1+0 records in 00:17:02.271 1+0 records out 00:17:02.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093242 s, 4.4 MB/s 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.271 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.532 1+0 records in 00:17:02.532 1+0 records out 00:17:02.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102379 s, 4.0 MB/s 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.532 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.793 1+0 records in 00:17:02.793 1+0 records out 00:17:02.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132928 s, 3.1 MB/s 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.793 14:49:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:03.054 14:49:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.054 1+0 records in 00:17:03.054 1+0 records out 00:17:03.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127474 s, 3.2 MB/s 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:03.054 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:03.313 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd0", 00:17:03.313 "bdev_name": "nvme0n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd1", 00:17:03.313 "bdev_name": "nvme0n2" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd2", 00:17:03.313 "bdev_name": "nvme0n3" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd3", 00:17:03.313 "bdev_name": "nvme1n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd4", 00:17:03.313 "bdev_name": "nvme2n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd5", 00:17:03.313 "bdev_name": "nvme3n1" 00:17:03.313 } 00:17:03.313 ]' 00:17:03.313 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:03.313 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd0", 00:17:03.313 "bdev_name": "nvme0n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd1", 00:17:03.313 "bdev_name": "nvme0n2" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd2", 00:17:03.313 "bdev_name": "nvme0n3" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd3", 00:17:03.313 "bdev_name": "nvme1n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd4", 00:17:03.313 "bdev_name": "nvme2n1" 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "nbd_device": "/dev/nbd5", 00:17:03.313 "bdev_name": "nvme3n1" 00:17:03.313 } 00:17:03.313 ]' 00:17:03.313 14:49:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.314 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.575 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.836 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.097 14:49:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.097 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:04.358 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:04.358 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:04.358 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:04.358 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.359 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.620 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:04.916 14:49:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:05.175 /dev/nbd0 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.175 1+0 records in 00:17:05.175 1+0 records out 00:17:05.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109123 s, 3.8 MB/s 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.175 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:05.434 /dev/nbd1 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.434 1+0 records in 00:17:05.434 1+0 records out 00:17:05.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612263 s, 6.7 MB/s 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.434 14:49:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.434 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:05.693 /dev/nbd10 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.693 1+0 records in 00:17:05.693 1+0 records out 00:17:05.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866377 s, 4.7 MB/s 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.693 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.694 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:05.952 /dev/nbd11 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.952 14:49:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.952 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.953 1+0 records in 00:17:05.953 1+0 records out 00:17:05.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111225 s, 3.7 MB/s 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.953 14:49:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:05.953 /dev/nbd12 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.213 1+0 records in 00:17:06.213 1+0 records out 00:17:06.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00329847 s, 1.2 MB/s 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:06.213 /dev/nbd13 00:17:06.213 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.475 1+0 records in 00:17:06.475 1+0 records out 00:17:06.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011831 s, 3.5 MB/s 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd0", 00:17:06.475 "bdev_name": "nvme0n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd1", 00:17:06.475 "bdev_name": "nvme0n2" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd10", 00:17:06.475 "bdev_name": "nvme0n3" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd11", 00:17:06.475 "bdev_name": "nvme1n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd12", 00:17:06.475 "bdev_name": "nvme2n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd13", 00:17:06.475 "bdev_name": "nvme3n1" 00:17:06.475 } 00:17:06.475 ]' 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd0", 00:17:06.475 "bdev_name": "nvme0n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd1", 00:17:06.475 "bdev_name": "nvme0n2" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd10", 00:17:06.475 "bdev_name": "nvme0n3" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd11", 00:17:06.475 "bdev_name": "nvme1n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd12", 00:17:06.475 "bdev_name": "nvme2n1" 00:17:06.475 }, 00:17:06.475 { 00:17:06.475 "nbd_device": "/dev/nbd13", 00:17:06.475 "bdev_name": "nvme3n1" 00:17:06.475 } 00:17:06.475 ]' 00:17:06.475 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:06.737 /dev/nbd1 00:17:06.737 /dev/nbd10 00:17:06.737 /dev/nbd11 00:17:06.737 /dev/nbd12 00:17:06.737 /dev/nbd13' 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:06.737 /dev/nbd1 00:17:06.737 /dev/nbd10 00:17:06.737 /dev/nbd11 00:17:06.737 /dev/nbd12 00:17:06.737 /dev/nbd13' 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:06.737 256+0 records in 00:17:06.737 256+0 records out 00:17:06.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668628 s, 157 MB/s 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:06.737 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:06.999 256+0 records in 00:17:06.999 256+0 records out 00:17:06.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242893 s, 4.3 MB/s 00:17:06.999 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:06.999 14:49:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:06.999 256+0 records in 00:17:06.999 256+0 records out 00:17:06.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245004 s, 
4.3 MB/s 00:17:06.999 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:06.999 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:07.260 256+0 records in 00:17:07.260 256+0 records out 00:17:07.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247318 s, 4.2 MB/s 00:17:07.260 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.260 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:07.521 256+0 records in 00:17:07.521 256+0 records out 00:17:07.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250888 s, 4.2 MB/s 00:17:07.521 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.521 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:08.096 256+0 records in 00:17:08.096 256+0 records out 00:17:08.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.316549 s, 3.3 MB/s 00:17:08.096 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:08.096 256+0 records in 00:17:08.096 256+0 records out 00:17:08.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199397 s, 5.3 MB/s 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:08.096 
14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:08.096 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.356 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.618 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.877 14:49:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.137 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.397 
14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:09.397 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:09.658 14:49:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:09.920 malloc_lvol_verify 00:17:09.920 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:10.181 55820120-90b6-465c-ba26-2f30e6f841ec 00:17:10.181 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:10.442 7a4bf0f4-569e-4723-ba37-238f6c7e3568 00:17:10.442 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:10.703 /dev/nbd0 00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:10.703 mke2fs 1.47.0 (5-Feb-2023) 00:17:10.703 Discarding device blocks: 0/4096 
done
00:17:10.703 Creating filesystem with 4096 1k blocks and 1024 inodes
00:17:10.703
00:17:10.703 Allocating group tables: 0/1 done
00:17:10.703 Writing inode tables: 0/1 done
00:17:10.703 Creating journal (1024 blocks): done
00:17:10.703 Writing superblocks and filesystem accounting information: 0/1 done
00:17:10.703
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:10.703 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73780
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73780 ']'
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73780
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73780
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:10.964 killing process with pid 73780
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73780'
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73780
00:17:10.964 14:49:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73780
00:17:11.908 14:49:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:17:11.908
00:17:11.909 real 0m11.258s
00:17:11.909 user 0m14.754s
00:17:11.909 sys 0m4.017s
00:17:11.909 14:49:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:11.909 14:49:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:17:11.909 ************************************
00:17:11.909 END TEST bdev_nbd ************************************
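The nbd_start_disk/nbd_stop_disk sequences traced throughout this test lean on two small polling helpers from autotest_common.sh: waitfornbd after every start, waitfornbd_exit after every stop. A minimal sketch of their logic, reconstructed from the xtrace output above rather than quoted from the SPDK source (the sleep interval and the temp-file path are assumptions):

    function waitfornbd() {
        local nbd_name=$1
        local i

        # Poll until the kernel registers the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1 # assumed back-off; the trace does not show the interval
        done

        # The device node can appear before it accepts I/O, so retry a single
        # 4 KiB O_DIRECT read until one succeeds.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done

        # A usable device must have produced a non-empty file.
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }

    function waitfornbd_exit() {
        local nbd_name=$1
        local i

        # Poll until the device disappears from /proc/partitions again.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

Both helpers give up after 20 iterations, which bounds how long a wedged NBD device can stall the suite; the repeated grep, dd, stat, and rm -f entries for nbd0 through nbd13 above are these helpers executing.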
00:17:11.909 14:49:49 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:17:11.909 14:49:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:17:11.909 14:49:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:17:11.909 14:49:49 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:17:11.909 14:49:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:11.909 14:49:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.909 14:49:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.909 ************************************ 00:17:11.909 START TEST bdev_fio 00:17:11.909 ************************************ 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:11.909 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:11.909 ************************************ 00:17:11.909 START TEST bdev_fio_rw_verify 00:17:11.909 ************************************ 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:11.909 14:49:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.170 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.170 fio-3.35 00:17:12.170 Starting 6 threads 00:17:24.469 00:17:24.469 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74206: Mon Dec 9 14:50:02 2024 00:17:24.469 read: IOPS=13.4k, BW=52.5MiB/s (55.0MB/s)(525MiB/10001msec) 00:17:24.469 slat (usec): min=2, max=1459, avg= 7.41, stdev=13.03 00:17:24.469 clat (usec): min=96, max=7851, avg=1430.70, stdev=770.88 00:17:24.469 lat (usec): min=99, max=7866, avg=1438.10, stdev=771.51 
00:17:24.469 clat percentiles (usec): 00:17:24.469 | 50.000th=[ 1336], 99.000th=[ 3785], 99.900th=[ 5080], 99.990th=[ 6587], 00:17:24.469 | 99.999th=[ 7832] 00:17:24.469 write: IOPS=13.6k, BW=53.3MiB/s (55.8MB/s)(533MiB/10001msec); 0 zone resets 00:17:24.469 slat (usec): min=13, max=4858, avg=44.32, stdev=147.20 00:17:24.469 clat (usec): min=88, max=9191, avg=1767.04, stdev=843.19 00:17:24.469 lat (usec): min=103, max=9214, avg=1811.36, stdev=855.97 00:17:24.469 clat percentiles (usec): 00:17:24.469 | 50.000th=[ 1647], 99.000th=[ 4359], 99.900th=[ 5735], 99.990th=[ 7701], 00:17:24.469 | 99.999th=[ 9110] 00:17:24.469 bw ( KiB/s): min=48635, max=63076, per=100.00%, avg=54648.21, stdev=853.80, samples=114 00:17:24.469 iops : min=12155, max=15768, avg=13661.05, stdev=213.47, samples=114 00:17:24.469 lat (usec) : 100=0.01%, 250=1.16%, 500=5.02%, 750=7.11%, 1000=10.44% 00:17:24.469 lat (msec) : 2=50.21%, 4=24.85%, 10=1.20% 00:17:24.470 cpu : usr=43.91%, sys=31.85%, ctx=5198, majf=0, minf=13922 00:17:24.470 IO depths : 1=11.1%, 2=23.4%, 4=51.4%, 8=14.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.470 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.470 issued rwts: total=134395,136346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:24.470 00:17:24.470 Run status group 0 (all jobs): 00:17:24.470 READ: bw=52.5MiB/s (55.0MB/s), 52.5MiB/s-52.5MiB/s (55.0MB/s-55.0MB/s), io=525MiB (550MB), run=10001-10001msec 00:17:24.470 WRITE: bw=53.3MiB/s (55.8MB/s), 53.3MiB/s-53.3MiB/s (55.8MB/s-55.8MB/s), io=533MiB (558MB), run=10001-10001msec 00:17:25.412 ----------------------------------------------------- 00:17:25.412 Suppressions used: 00:17:25.412 count bytes template 00:17:25.412 6 48 /usr/src/fio/parse.c 00:17:25.412 1867 179232 /usr/src/fio/iolog.c 00:17:25.412 1 8 libtcmalloc_minimal.so 00:17:25.412 1 904 libcrypto.so 00:17:25.412 ----------------------------------------------------- 00:17:25.412 00:17:25.412 00:17:25.412 real 0m13.245s 00:17:25.412 user 0m28.010s 00:17:25.412 sys 0m19.482s 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.412 ************************************ 00:17:25.412 END TEST bdev_fio_rw_verify 00:17:25.412 ************************************ 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4d7da76b-33c3-4fed-ab21-5317410cbb12"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4d7da76b-33c3-4fed-ab21-5317410cbb12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "624c33c9-0fa9-4aab-bf41-ea688c44cb6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "624c33c9-0fa9-4aab-bf41-ea688c44cb6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "61c57f11-f087-4c4e-a231-cac0e7ac63b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61c57f11-f087-4c4e-a231-cac0e7ac63b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8bb1a770-5ccf-478a-bb20-93658a0f57bb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8bb1a770-5ccf-478a-bb20-93658a0f57bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a7520b81-da6b-45a0-8b97-62ed6bc7469f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a7520b81-da6b-45a0-8b97-62ed6bc7469f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fe89d82b-c27b-4d0f-bab4-648cddf0a9cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fe89d82b-c27b-4d0f-bab4-648cddf0a9cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.412 /home/vagrant/spdk_repo/spdk 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:25.412 00:17:25.412 real 0m13.418s 00:17:25.412 user 
0m28.086s 00:17:25.412 sys 0m19.559s 00:17:25.412 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.412 ************************************ 00:17:25.412 END TEST bdev_fio 00:17:25.413 ************************************ 00:17:25.413 14:50:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.413 14:50:03 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.413 14:50:03 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.413 14:50:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:25.413 14:50:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.413 14:50:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.413 ************************************ 00:17:25.413 START TEST bdev_verify 00:17:25.413 ************************************ 00:17:25.413 14:50:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.413 [2024-12-09 14:50:03.456681] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:25.413 [2024-12-09 14:50:03.456846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74384 ] 00:17:25.673 [2024-12-09 14:50:03.622650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.673 [2024-12-09 14:50:03.772239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.673 [2024-12-09 14:50:03.772334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.243 Running I/O for 5 seconds... 
00:17:28.577 24641.00 IOPS, 96.25 MiB/s [2024-12-09T14:50:07.646Z] 25280.00 IOPS, 98.75 MiB/s [2024-12-09T14:50:08.586Z] 25067.00 IOPS, 97.92 MiB/s [2024-12-09T14:50:09.530Z] 25032.25 IOPS, 97.78 MiB/s [2024-12-09T14:50:09.530Z] 24627.40 IOPS, 96.20 MiB/s 00:17:31.408 Latency(us) 00:17:31.408 [2024-12-09T14:50:09.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.408 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0x80000 00:17:31.408 nvme0n1 : 5.06 1822.38 7.12 0.00 0.00 70124.75 13712.15 62107.96 00:17:31.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x80000 length 0x80000 00:17:31.408 nvme0n1 : 5.06 1972.10 7.70 0.00 0.00 64789.14 10637.00 75416.81 00:17:31.408 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0x80000 00:17:31.408 nvme0n2 : 5.03 1830.51 7.15 0.00 0.00 69690.82 11342.77 61704.66 00:17:31.408 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x80000 length 0x80000 00:17:31.408 nvme0n2 : 5.03 1958.38 7.65 0.00 0.00 65125.76 16636.06 66947.54 00:17:31.408 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0x80000 00:17:31.408 nvme0n3 : 5.06 1821.82 7.12 0.00 0.00 69908.20 9830.40 63317.86 00:17:31.408 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x80000 length 0x80000 00:17:31.408 nvme0n3 : 5.09 1962.41 7.67 0.00 0.00 64872.79 13712.15 59688.17 00:17:31.408 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0x20000 00:17:31.408 nvme1n1 : 5.07 1841.68 7.19 0.00 0.00 69029.02 8318.03 65334.35 00:17:31.408 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x20000 length 0x20000 00:17:31.408 nvme1n1 : 5.07 1967.87 7.69 0.00 0.00 64564.84 4789.17 64527.75 00:17:31.408 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0xbd0bd 00:17:31.408 nvme2n1 : 5.07 2533.43 9.90 0.00 0.00 50073.22 6654.42 62107.96 00:17:31.408 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:31.408 nvme2n1 : 5.08 2766.78 10.81 0.00 0.00 45699.51 4562.31 54041.99 00:17:31.408 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0x0 length 0xa0000 00:17:31.408 nvme3n1 : 5.08 1864.09 7.28 0.00 0.00 67980.14 7410.61 61704.66 00:17:31.408 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.408 Verification LBA range: start 0xa0000 length 0xa0000 00:17:31.408 nvme3n1 : 5.08 1964.86 7.68 0.00 0.00 64340.08 8015.56 64527.75 00:17:31.408 [2024-12-09T14:50:09.530Z] =================================================================================================================== 00:17:31.408 [2024-12-09T14:50:09.530Z] Total : 24306.32 94.95 0.00 0.00 62773.27 4562.31 75416.81 00:17:32.352 00:17:32.352 real 0m6.815s 00:17:32.352 user 0m10.757s 00:17:32.352 sys 0m1.671s 00:17:32.352 14:50:10 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.352 ************************************ 00:17:32.352 END TEST bdev_verify 00:17:32.352 ************************************ 00:17:32.352 14:50:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:32.352 14:50:10 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.352 14:50:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:32.352 14:50:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.352 14:50:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.352 ************************************ 00:17:32.352 START TEST bdev_verify_big_io 00:17:32.352 ************************************ 00:17:32.352 14:50:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.352 [2024-12-09 14:50:10.342476] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:32.352 [2024-12-09 14:50:10.342650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74483 ] 00:17:32.613 [2024-12-09 14:50:10.518137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.613 [2024-12-09 14:50:10.643108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.613 [2024-12-09 14:50:10.643205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.188 Running I/O for 5 seconds... 
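This big-I/O pass is the same bdevperf verify harness with 64 KiB I/Os (-o 65536) in place of 4 KiB. A hedged sketch reproducing both passes in one loop; the paths and flags are taken from the two traces, while the loop itself is only illustration:

  SPDK=/home/vagrant/spdk_repo/spdk
  for iosz in 4096 65536; do   # small-I/O pass, then big-I/O pass
    "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o "$iosz" -w verify -t 5 -C -m 0x3
  done

In the results that follow, only the I/O size changed, so the totals land at roughly 15x fewer IOPS than the 4 KiB run at slightly higher MiB/s.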
00:17:39.315 2707.00 IOPS, 169.19 MiB/s [2024-12-09T14:50:17.699Z] 3653.50 IOPS, 228.34 MiB/s 00:17:39.577 Latency(us) 00:17:39.577 [2024-12-09T14:50:17.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.577 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.577 Verification LBA range: start 0x0 length 0x8000 00:17:39.577 nvme0n1 : 5.95 118.36 7.40 0.00 0.00 1015685.41 52428.80 1619646.62 00:17:39.577 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.577 Verification LBA range: start 0x8000 length 0x8000 00:17:39.577 nvme0n1 : 5.93 109.27 6.83 0.00 0.00 1108281.48 6503.19 1490591.11 00:17:39.578 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x0 length 0x8000 00:17:39.578 nvme0n2 : 6.08 94.81 5.93 0.00 0.00 1239138.02 120989.54 2245565.83 00:17:39.578 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x8000 length 0x8000 00:17:39.578 nvme0n2 : 5.93 107.88 6.74 0.00 0.00 1084686.34 261337.40 1297007.85 00:17:39.578 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x0 length 0x8000 00:17:39.578 nvme0n3 : 6.08 115.81 7.24 0.00 0.00 961276.20 121796.14 1793871.56 00:17:39.578 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x8000 length 0x8000 00:17:39.578 nvme0n3 : 5.95 123.60 7.73 0.00 0.00 960817.49 186323.89 1432516.14 00:17:39.578 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x0 length 0x2000 00:17:39.578 nvme1n1 : 6.17 129.72 8.11 0.00 0.00 821362.69 83079.48 1619646.62 00:17:39.578 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x2000 length 0x2000 00:17:39.578 nvme1n1 : 5.96 107.44 6.71 0.00 0.00 1058034.00 3629.69 1322818.95 00:17:39.578 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x0 length 0xbd0b 00:17:39.578 nvme2n1 : 6.23 220.82 13.80 0.00 0.00 459788.36 2747.47 1084066.26 00:17:39.578 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:39.578 nvme2n1 : 5.97 161.37 10.09 0.00 0.00 705189.06 7965.14 780785.82 00:17:39.578 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0x0 length 0xa000 00:17:39.578 nvme3n1 : 6.46 233.92 14.62 0.00 0.00 416185.24 806.60 2232660.28 00:17:39.578 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.578 Verification LBA range: start 0xa000 length 0xa000 00:17:39.578 nvme3n1 : 5.97 136.70 8.54 0.00 0.00 815257.54 3654.89 1600288.30 00:17:39.578 [2024-12-09T14:50:17.700Z] =================================================================================================================== 00:17:39.578 [2024-12-09T14:50:17.700Z] Total : 1659.70 103.73 0.00 0.00 807273.53 806.60 2245565.83 00:17:40.965 00:17:40.965 real 0m8.489s 00:17:40.965 user 0m15.453s 00:17:40.965 sys 0m0.539s 00:17:40.965 14:50:18 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.965 ************************************ 00:17:40.965 END TEST bdev_verify_big_io 
00:17:40.965 ************************************ 00:17:40.965 14:50:18 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.965 14:50:18 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.965 14:50:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:40.965 14:50:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.965 14:50:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.965 ************************************ 00:17:40.965 START TEST bdev_write_zeroes 00:17:40.965 ************************************ 00:17:40.965 14:50:18 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.965 [2024-12-09 14:50:18.893544] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:40.965 [2024-12-09 14:50:18.893686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74601 ] 00:17:40.965 [2024-12-09 14:50:19.058314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.227 [2024-12-09 14:50:19.193213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.799 Running I/O for 1 seconds... 00:17:42.743 75520.00 IOPS, 295.00 MiB/s 00:17:42.743 Latency(us) 00:17:42.743 [2024-12-09T14:50:20.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.743 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme0n1 : 1.02 12481.19 48.75 0.00 0.00 10244.04 8015.56 18148.43 00:17:42.743 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme0n2 : 1.02 12466.53 48.70 0.00 0.00 10246.15 8065.97 18047.61 00:17:42.743 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme0n3 : 1.02 12451.75 48.64 0.00 0.00 10248.18 8116.38 18551.73 00:17:42.743 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme1n1 : 1.02 12437.75 48.58 0.00 0.00 10250.78 8116.38 19055.85 00:17:42.743 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme2n1 : 1.03 12730.12 49.73 0.00 0.00 10003.70 5721.80 17140.18 00:17:42.743 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.743 nvme3n1 : 1.02 12421.76 48.52 0.00 0.00 10242.40 8065.97 19358.33 00:17:42.743 [2024-12-09T14:50:20.865Z] =================================================================================================================== 00:17:42.743 [2024-12-09T14:50:20.865Z] Total : 74989.10 292.93 0.00 0.00 10204.86 5721.80 19358.33 00:17:43.688 00:17:43.688 real 0m2.764s 00:17:43.688 user 0m2.048s 00:17:43.688 sys 0m0.518s 00:17:43.688 ************************************ 00:17:43.688 END TEST bdev_write_zeroes 00:17:43.688 ************************************ 00:17:43.688 14:50:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.688 14:50:21 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:43.688 14:50:21 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.688 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:43.688 14:50:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.688 14:50:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.688 ************************************ 00:17:43.688 START TEST bdev_json_nonenclosed 00:17:43.688 ************************************ 00:17:43.688 14:50:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.688 [2024-12-09 14:50:21.727638] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:17:43.688 [2024-12-09 14:50:21.727782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74654 ] 00:17:43.949 [2024-12-09 14:50:21.891202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.949 [2024-12-09 14:50:22.028209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.949 [2024-12-09 14:50:22.028334] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:43.949 [2024-12-09 14:50:22.028356] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:43.949 [2024-12-09 14:50:22.028369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:44.211 00:17:44.211 real 0m0.577s 00:17:44.211 user 0m0.359s 00:17:44.211 sys 0m0.112s 00:17:44.211 14:50:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.211 ************************************ 00:17:44.211 END TEST bdev_json_nonenclosed 00:17:44.211 ************************************ 00:17:44.211 14:50:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:44.211 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.211 14:50:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:44.211 14:50:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.211 14:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:44.211 ************************************ 00:17:44.211 START TEST bdev_json_nonarray 00:17:44.211 ************************************ 00:17:44.211 14:50:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.472 [2024-12-09 14:50:22.373474] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:17:44.472 [2024-12-09 14:50:22.373623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74674 ] 00:17:44.472 [2024-12-09 14:50:22.538780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.732 [2024-12-09 14:50:22.675194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.732 [2024-12-09 14:50:22.675334] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:44.732 [2024-12-09 14:50:22.675357] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:44.732 [2024-12-09 14:50:22.675369] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:44.994 00:17:44.994 real 0m0.583s 00:17:44.994 user 0m0.351s 00:17:44.994 sys 0m0.125s 00:17:44.994 14:50:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.994 14:50:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:44.994 ************************************ 00:17:44.994 END TEST bdev_json_nonarray 00:17:44.994 ************************************ 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:44.994 14:50:22 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:50.863 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:51.125 00:17:51.125 real 0m58.843s 00:17:51.125 user 1m23.937s 00:17:51.125 sys 0m38.903s 00:17:51.125 14:50:29 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.125 14:50:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:51.125 ************************************ 00:17:51.125 END TEST blockdev_xnvme 00:17:51.125 ************************************ 00:17:51.125 14:50:29 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:51.125 14:50:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.125 14:50:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.125 14:50:29 -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.125 ************************************ 00:17:51.125 START TEST ublk 00:17:51.125 ************************************ 00:17:51.125 14:50:29 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:51.125 * Looking for test storage... 00:17:51.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:51.125 14:50:29 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.125 14:50:29 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.125 14:50:29 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.125 14:50:29 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.126 14:50:29 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.126 14:50:29 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.126 14:50:29 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.126 14:50:29 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.126 14:50:29 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.126 14:50:29 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.126 14:50:29 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.126 14:50:29 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.126 14:50:29 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.126 14:50:29 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:51.126 14:50:29 ublk -- scripts/common.sh@345 -- # : 1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.126 14:50:29 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.126 14:50:29 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@353 -- # local d=1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.126 14:50:29 ublk -- scripts/common.sh@355 -- # echo 1 00:17:51.126 14:50:29 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.388 14:50:29 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:51.388 14:50:29 ublk -- scripts/common.sh@353 -- # local d=2 00:17:51.388 14:50:29 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.388 14:50:29 ublk -- scripts/common.sh@355 -- # echo 2 00:17:51.388 14:50:29 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.388 14:50:29 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.388 14:50:29 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.388 14:50:29 ublk -- scripts/common.sh@368 -- # return 0 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.388 --rc genhtml_branch_coverage=1 00:17:51.388 --rc genhtml_function_coverage=1 00:17:51.388 --rc genhtml_legend=1 00:17:51.388 --rc geninfo_all_blocks=1 00:17:51.388 --rc geninfo_unexecuted_blocks=1 00:17:51.388 00:17:51.388 ' 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.388 --rc genhtml_branch_coverage=1 00:17:51.388 --rc genhtml_function_coverage=1 00:17:51.388 --rc genhtml_legend=1 00:17:51.388 --rc geninfo_all_blocks=1 00:17:51.388 --rc geninfo_unexecuted_blocks=1 00:17:51.388 00:17:51.388 ' 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.388 --rc genhtml_branch_coverage=1 00:17:51.388 --rc genhtml_function_coverage=1 00:17:51.388 --rc genhtml_legend=1 00:17:51.388 --rc geninfo_all_blocks=1 00:17:51.388 --rc geninfo_unexecuted_blocks=1 00:17:51.388 00:17:51.388 ' 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.388 --rc genhtml_branch_coverage=1 00:17:51.388 --rc genhtml_function_coverage=1 00:17:51.388 --rc genhtml_legend=1 00:17:51.388 --rc geninfo_all_blocks=1 00:17:51.388 --rc geninfo_unexecuted_blocks=1 00:17:51.388 00:17:51.388 ' 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:51.388 14:50:29 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:51.388 14:50:29 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:51.388 14:50:29 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:51.388 14:50:29 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:51.388 14:50:29 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:51.388 14:50:29 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:51.388 14:50:29 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:51.388 14:50:29 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:51.388 14:50:29 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:51.388 14:50:29 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.388 14:50:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 ************************************ 00:17:51.388 START TEST test_save_ublk_config 00:17:51.388 ************************************ 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=74977 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 74977 00:17:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 74977 ']' 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.388 14:50:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 [2024-12-09 14:50:29.378192] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:17:51.388 [2024-12-09 14:50:29.378335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74977 ] 00:17:51.652 [2024-12-09 14:50:29.543066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.652 [2024-12-09 14:50:29.683203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:52.596 [2024-12-09 14:50:30.507835] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:52.596 [2024-12-09 14:50:30.508797] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:52.596 malloc0 00:17:52.596 [2024-12-09 14:50:30.587982] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:52.596 [2024-12-09 14:50:30.588093] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:52.596 [2024-12-09 14:50:30.588105] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:52.596 [2024-12-09 14:50:30.588113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:52.596 [2024-12-09 14:50:30.595865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:52.596 [2024-12-09 14:50:30.595896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:52.596 [2024-12-09 14:50:30.603851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:52.596 [2024-12-09 14:50:30.603986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:52.596 [2024-12-09 14:50:30.620845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:52.596 0 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.596 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:52.858 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.858 14:50:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:52.858 "subsystems": [ 00:17:52.858 { 00:17:52.858 "subsystem": "fsdev", 00:17:52.858 "config": [ 00:17:52.858 { 00:17:52.858 "method": "fsdev_set_opts", 00:17:52.858 "params": { 00:17:52.858 "fsdev_io_pool_size": 65535, 00:17:52.858 "fsdev_io_cache_size": 256 00:17:52.858 } 00:17:52.858 } 00:17:52.858 ] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "keyring", 00:17:52.858 "config": [] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "iobuf", 00:17:52.858 "config": [ 00:17:52.858 { 
00:17:52.858 "method": "iobuf_set_options", 00:17:52.858 "params": { 00:17:52.858 "small_pool_count": 8192, 00:17:52.858 "large_pool_count": 1024, 00:17:52.858 "small_bufsize": 8192, 00:17:52.858 "large_bufsize": 135168, 00:17:52.858 "enable_numa": false 00:17:52.858 } 00:17:52.858 } 00:17:52.858 ] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "sock", 00:17:52.858 "config": [ 00:17:52.858 { 00:17:52.858 "method": "sock_set_default_impl", 00:17:52.858 "params": { 00:17:52.858 "impl_name": "posix" 00:17:52.858 } 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "method": "sock_impl_set_options", 00:17:52.858 "params": { 00:17:52.858 "impl_name": "ssl", 00:17:52.858 "recv_buf_size": 4096, 00:17:52.858 "send_buf_size": 4096, 00:17:52.858 "enable_recv_pipe": true, 00:17:52.858 "enable_quickack": false, 00:17:52.858 "enable_placement_id": 0, 00:17:52.858 "enable_zerocopy_send_server": true, 00:17:52.858 "enable_zerocopy_send_client": false, 00:17:52.858 "zerocopy_threshold": 0, 00:17:52.858 "tls_version": 0, 00:17:52.858 "enable_ktls": false 00:17:52.858 } 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "method": "sock_impl_set_options", 00:17:52.858 "params": { 00:17:52.858 "impl_name": "posix", 00:17:52.858 "recv_buf_size": 2097152, 00:17:52.858 "send_buf_size": 2097152, 00:17:52.858 "enable_recv_pipe": true, 00:17:52.858 "enable_quickack": false, 00:17:52.858 "enable_placement_id": 0, 00:17:52.858 "enable_zerocopy_send_server": true, 00:17:52.858 "enable_zerocopy_send_client": false, 00:17:52.858 "zerocopy_threshold": 0, 00:17:52.858 "tls_version": 0, 00:17:52.858 "enable_ktls": false 00:17:52.858 } 00:17:52.858 } 00:17:52.858 ] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "vmd", 00:17:52.858 "config": [] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "accel", 00:17:52.858 "config": [ 00:17:52.858 { 00:17:52.858 "method": "accel_set_options", 00:17:52.858 "params": { 00:17:52.858 "small_cache_size": 128, 00:17:52.858 "large_cache_size": 16, 00:17:52.858 "task_count": 2048, 00:17:52.858 "sequence_count": 2048, 00:17:52.858 "buf_count": 2048 00:17:52.858 } 00:17:52.858 } 00:17:52.858 ] 00:17:52.858 }, 00:17:52.858 { 00:17:52.858 "subsystem": "bdev", 00:17:52.858 "config": [ 00:17:52.858 { 00:17:52.858 "method": "bdev_set_options", 00:17:52.858 "params": { 00:17:52.858 "bdev_io_pool_size": 65535, 00:17:52.858 "bdev_io_cache_size": 256, 00:17:52.858 "bdev_auto_examine": true, 00:17:52.858 "iobuf_small_cache_size": 128, 00:17:52.859 "iobuf_large_cache_size": 16 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_raid_set_options", 00:17:52.859 "params": { 00:17:52.859 "process_window_size_kb": 1024, 00:17:52.859 "process_max_bandwidth_mb_sec": 0 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_iscsi_set_options", 00:17:52.859 "params": { 00:17:52.859 "timeout_sec": 30 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_nvme_set_options", 00:17:52.859 "params": { 00:17:52.859 "action_on_timeout": "none", 00:17:52.859 "timeout_us": 0, 00:17:52.859 "timeout_admin_us": 0, 00:17:52.859 "keep_alive_timeout_ms": 10000, 00:17:52.859 "arbitration_burst": 0, 00:17:52.859 "low_priority_weight": 0, 00:17:52.859 "medium_priority_weight": 0, 00:17:52.859 "high_priority_weight": 0, 00:17:52.859 "nvme_adminq_poll_period_us": 10000, 00:17:52.859 "nvme_ioq_poll_period_us": 0, 00:17:52.859 "io_queue_requests": 0, 00:17:52.859 "delay_cmd_submit": true, 00:17:52.859 "transport_retry_count": 4, 00:17:52.859 
"bdev_retry_count": 3, 00:17:52.859 "transport_ack_timeout": 0, 00:17:52.859 "ctrlr_loss_timeout_sec": 0, 00:17:52.859 "reconnect_delay_sec": 0, 00:17:52.859 "fast_io_fail_timeout_sec": 0, 00:17:52.859 "disable_auto_failback": false, 00:17:52.859 "generate_uuids": false, 00:17:52.859 "transport_tos": 0, 00:17:52.859 "nvme_error_stat": false, 00:17:52.859 "rdma_srq_size": 0, 00:17:52.859 "io_path_stat": false, 00:17:52.859 "allow_accel_sequence": false, 00:17:52.859 "rdma_max_cq_size": 0, 00:17:52.859 "rdma_cm_event_timeout_ms": 0, 00:17:52.859 "dhchap_digests": [ 00:17:52.859 "sha256", 00:17:52.859 "sha384", 00:17:52.859 "sha512" 00:17:52.859 ], 00:17:52.859 "dhchap_dhgroups": [ 00:17:52.859 "null", 00:17:52.859 "ffdhe2048", 00:17:52.859 "ffdhe3072", 00:17:52.859 "ffdhe4096", 00:17:52.859 "ffdhe6144", 00:17:52.859 "ffdhe8192" 00:17:52.859 ] 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_nvme_set_hotplug", 00:17:52.859 "params": { 00:17:52.859 "period_us": 100000, 00:17:52.859 "enable": false 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_malloc_create", 00:17:52.859 "params": { 00:17:52.859 "name": "malloc0", 00:17:52.859 "num_blocks": 8192, 00:17:52.859 "block_size": 4096, 00:17:52.859 "physical_block_size": 4096, 00:17:52.859 "uuid": "6f323cc1-8d0e-42f5-9476-829c60b9082b", 00:17:52.859 "optimal_io_boundary": 0, 00:17:52.859 "md_size": 0, 00:17:52.859 "dif_type": 0, 00:17:52.859 "dif_is_head_of_md": false, 00:17:52.859 "dif_pi_format": 0 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "bdev_wait_for_examine" 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "scsi", 00:17:52.859 "config": null 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "scheduler", 00:17:52.859 "config": [ 00:17:52.859 { 00:17:52.859 "method": "framework_set_scheduler", 00:17:52.859 "params": { 00:17:52.859 "name": "static" 00:17:52.859 } 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "vhost_scsi", 00:17:52.859 "config": [] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "vhost_blk", 00:17:52.859 "config": [] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "ublk", 00:17:52.859 "config": [ 00:17:52.859 { 00:17:52.859 "method": "ublk_create_target", 00:17:52.859 "params": { 00:17:52.859 "cpumask": "1" 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "ublk_start_disk", 00:17:52.859 "params": { 00:17:52.859 "bdev_name": "malloc0", 00:17:52.859 "ublk_id": 0, 00:17:52.859 "num_queues": 1, 00:17:52.859 "queue_depth": 128 00:17:52.859 } 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "nbd", 00:17:52.859 "config": [] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "nvmf", 00:17:52.859 "config": [ 00:17:52.859 { 00:17:52.859 "method": "nvmf_set_config", 00:17:52.859 "params": { 00:17:52.859 "discovery_filter": "match_any", 00:17:52.859 "admin_cmd_passthru": { 00:17:52.859 "identify_ctrlr": false 00:17:52.859 }, 00:17:52.859 "dhchap_digests": [ 00:17:52.859 "sha256", 00:17:52.859 "sha384", 00:17:52.859 "sha512" 00:17:52.859 ], 00:17:52.859 "dhchap_dhgroups": [ 00:17:52.859 "null", 00:17:52.859 "ffdhe2048", 00:17:52.859 "ffdhe3072", 00:17:52.859 "ffdhe4096", 00:17:52.859 "ffdhe6144", 00:17:52.859 "ffdhe8192" 00:17:52.859 ] 00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "nvmf_set_max_subsystems", 00:17:52.859 "params": { 00:17:52.859 "max_subsystems": 1024 
00:17:52.859 } 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "method": "nvmf_set_crdt", 00:17:52.859 "params": { 00:17:52.859 "crdt1": 0, 00:17:52.859 "crdt2": 0, 00:17:52.859 "crdt3": 0 00:17:52.859 } 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 }, 00:17:52.859 { 00:17:52.859 "subsystem": "iscsi", 00:17:52.859 "config": [ 00:17:52.859 { 00:17:52.859 "method": "iscsi_set_options", 00:17:52.859 "params": { 00:17:52.859 "node_base": "iqn.2016-06.io.spdk", 00:17:52.859 "max_sessions": 128, 00:17:52.859 "max_connections_per_session": 2, 00:17:52.859 "max_queue_depth": 64, 00:17:52.859 "default_time2wait": 2, 00:17:52.859 "default_time2retain": 20, 00:17:52.859 "first_burst_length": 8192, 00:17:52.859 "immediate_data": true, 00:17:52.859 "allow_duplicated_isid": false, 00:17:52.859 "error_recovery_level": 0, 00:17:52.859 "nop_timeout": 60, 00:17:52.859 "nop_in_interval": 30, 00:17:52.859 "disable_chap": false, 00:17:52.859 "require_chap": false, 00:17:52.859 "mutual_chap": false, 00:17:52.859 "chap_group": 0, 00:17:52.859 "max_large_datain_per_connection": 64, 00:17:52.859 "max_r2t_per_connection": 4, 00:17:52.859 "pdu_pool_size": 36864, 00:17:52.859 "immediate_data_pool_size": 16384, 00:17:52.859 "data_out_pool_size": 2048 00:17:52.859 } 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 } 00:17:52.859 ] 00:17:52.859 }' 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 74977 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 74977 ']' 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 74977 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74977 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.859 killing process with pid 74977 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74977' 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 74977 00:17:52.859 14:50:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 74977 00:17:54.310 [2024-12-09 14:50:32.089964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:54.310 [2024-12-09 14:50:32.127838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:54.310 [2024-12-09 14:50:32.127948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:54.310 [2024-12-09 14:50:32.139823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:54.310 [2024-12-09 14:50:32.139868] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:54.310 [2024-12-09 14:50:32.139879] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:54.310 [2024-12-09 14:50:32.139909] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:54.310 [2024-12-09 14:50:32.140034] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:55.717 14:50:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 
00:17:55.717 14:50:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75037 00:17:55.717 14:50:33 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75037 00:17:55.717 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75037 ']' 00:17:55.717 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.718 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.718 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.718 14:50:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:55.718 "subsystems": [ 00:17:55.718 { 00:17:55.718 "subsystem": "fsdev", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "fsdev_set_opts", 00:17:55.718 "params": { 00:17:55.718 "fsdev_io_pool_size": 65535, 00:17:55.718 "fsdev_io_cache_size": 256 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "keyring", 00:17:55.718 "config": [] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "iobuf", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "iobuf_set_options", 00:17:55.718 "params": { 00:17:55.718 "small_pool_count": 8192, 00:17:55.718 "large_pool_count": 1024, 00:17:55.718 "small_bufsize": 8192, 00:17:55.718 "large_bufsize": 135168, 00:17:55.718 "enable_numa": false 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "sock", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "sock_set_default_impl", 00:17:55.718 "params": { 00:17:55.718 "impl_name": "posix" 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "sock_impl_set_options", 00:17:55.718 "params": { 00:17:55.718 "impl_name": "ssl", 00:17:55.718 "recv_buf_size": 4096, 00:17:55.718 "send_buf_size": 4096, 00:17:55.718 "enable_recv_pipe": true, 00:17:55.718 "enable_quickack": false, 00:17:55.718 "enable_placement_id": 0, 00:17:55.718 "enable_zerocopy_send_server": true, 00:17:55.718 "enable_zerocopy_send_client": false, 00:17:55.718 "zerocopy_threshold": 0, 00:17:55.718 "tls_version": 0, 00:17:55.718 "enable_ktls": false 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "sock_impl_set_options", 00:17:55.718 "params": { 00:17:55.718 "impl_name": "posix", 00:17:55.718 "recv_buf_size": 2097152, 00:17:55.718 "send_buf_size": 2097152, 00:17:55.718 "enable_recv_pipe": true, 00:17:55.718 "enable_quickack": false, 00:17:55.718 "enable_placement_id": 0, 00:17:55.718 "enable_zerocopy_send_server": true, 00:17:55.718 "enable_zerocopy_send_client": false, 00:17:55.718 "zerocopy_threshold": 0, 00:17:55.718 "tls_version": 0, 00:17:55.718 "enable_ktls": false 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "vmd", 00:17:55.718 "config": [] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "accel", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "accel_set_options", 00:17:55.718 "params": { 00:17:55.718 "small_cache_size": 128, 00:17:55.718 "large_cache_size": 16, 00:17:55.718 "task_count": 2048, 00:17:55.718 "sequence_count": 2048, 00:17:55.718 "buf_count": 2048 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 
00:17:55.718 { 00:17:55.718 "subsystem": "bdev", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "bdev_set_options", 00:17:55.718 "params": { 00:17:55.718 "bdev_io_pool_size": 65535, 00:17:55.718 "bdev_io_cache_size": 256, 00:17:55.718 "bdev_auto_examine": true, 00:17:55.718 "iobuf_small_cache_size": 128, 00:17:55.718 "iobuf_large_cache_size": 16 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_raid_set_options", 00:17:55.718 "params": { 00:17:55.718 "process_window_size_kb": 1024, 00:17:55.718 "process_max_bandwidth_mb_sec": 0 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_iscsi_set_options", 00:17:55.718 "params": { 00:17:55.718 "timeout_sec": 30 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_nvme_set_options", 00:17:55.718 "params": { 00:17:55.718 "action_on_timeout": "none", 00:17:55.718 "timeout_us": 0, 00:17:55.718 "timeout_admin_us": 0, 00:17:55.718 "keep_alive_timeout_ms": 10000, 00:17:55.718 "arbitration_burst": 0, 00:17:55.718 "low_priority_weight": 0, 00:17:55.718 "medium_priority_weight": 0, 00:17:55.718 "high_priority_weight": 0, 00:17:55.718 "nvme_adminq_poll_period_us": 10000, 00:17:55.718 "nvme_ioq_poll_period_us": 0, 00:17:55.718 "io_queue_requests": 0, 00:17:55.718 "delay_cmd_submit": true, 00:17:55.718 "transport_retry_count": 4, 00:17:55.718 "bdev_retry_count": 3, 00:17:55.718 "transport_ack_timeout": 0, 00:17:55.718 "ctrlr_loss_timeout_sec": 0, 00:17:55.718 "reconnect_delay_sec": 0, 00:17:55.718 "fast_io_fail_timeout_sec": 0, 00:17:55.718 "disable_auto_failback": false, 00:17:55.718 "generate_uuids": false, 00:17:55.718 "transport_tos": 0, 00:17:55.718 "nvme_error_stat": false, 00:17:55.718 "rdma_srq_size": 0, 00:17:55.718 "io_path_stat": false, 00:17:55.718 "allow_accel_sequence": false, 00:17:55.718 "rdma_max_cq_size": 0, 00:17:55.718 "rdma_cm_event_timeout_ms": 0, 00:17:55.718 "dhchap_digests": [ 00:17:55.718 "sha256", 00:17:55.718 "sha384", 00:17:55.718 "sha512" 00:17:55.718 ], 00:17:55.718 "dhchap_dhgroups": [ 00:17:55.718 "null", 00:17:55.718 "ffdhe2048", 00:17:55.718 "ffdhe3072", 00:17:55.718 "ffdhe4096", 00:17:55.718 "ffdhe6144", 00:17:55.718 "ffdhe8192" 00:17:55.718 ] 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_nvme_set_hotplug", 00:17:55.718 "params": { 00:17:55.718 "period_us": 100000, 00:17:55.718 "enable": false 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_malloc_create", 00:17:55.718 "params": { 00:17:55.718 "name": "malloc0", 00:17:55.718 "num_blocks": 8192, 00:17:55.718 "block_size": 4096, 00:17:55.718 "physical_block_size": 4096, 00:17:55.718 "uuid": "6f323cc1-8d0e-42f5-9476-829c60b9082b", 00:17:55.718 "optimal_io_boundary": 0, 00:17:55.718 "md_size": 0, 00:17:55.718 "dif_type": 0, 00:17:55.718 "dif_is_head_of_md": false, 00:17:55.718 "dif_pi_format": 0 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "bdev_wait_for_examine" 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "scsi", 00:17:55.718 "config": null 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "scheduler", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "framework_set_scheduler", 00:17:55.718 "params": { 00:17:55.718 "name": "static" 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "vhost_scsi", 00:17:55.718 "config": [] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "vhost_blk", 00:17:55.718 
"config": [] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "ublk", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "ublk_create_target", 00:17:55.718 "params": { 00:17:55.718 "cpumask": "1" 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "ublk_start_disk", 00:17:55.718 "params": { 00:17:55.718 "bdev_name": "malloc0", 00:17:55.718 "ublk_id": 0, 00:17:55.718 "num_queues": 1, 00:17:55.718 "queue_depth": 128 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "nbd", 00:17:55.718 "config": [] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "nvmf", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "nvmf_set_config", 00:17:55.718 "params": { 00:17:55.718 "discovery_filter": "match_any", 00:17:55.718 "admin_cmd_passthru": { 00:17:55.718 "identify_ctrlr": false 00:17:55.718 }, 00:17:55.718 "dhchap_digests": [ 00:17:55.718 "sha256", 00:17:55.718 "sha384", 00:17:55.718 "sha512" 00:17:55.718 ], 00:17:55.718 "dhchap_dhgroups": [ 00:17:55.718 "null", 00:17:55.718 "ffdhe2048", 00:17:55.718 "ffdhe3072", 00:17:55.718 "ffdhe4096", 00:17:55.718 "ffdhe6144", 00:17:55.718 "ffdhe8192" 00:17:55.718 ] 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "nvmf_set_max_subsystems", 00:17:55.718 "params": { 00:17:55.718 "max_subsystems": 1024 00:17:55.718 } 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "method": "nvmf_set_crdt", 00:17:55.718 "params": { 00:17:55.718 "crdt1": 0, 00:17:55.718 "crdt2": 0, 00:17:55.718 "crdt3": 0 00:17:55.718 } 00:17:55.718 } 00:17:55.718 ] 00:17:55.718 }, 00:17:55.718 { 00:17:55.718 "subsystem": "iscsi", 00:17:55.718 "config": [ 00:17:55.718 { 00:17:55.718 "method": "iscsi_set_options", 00:17:55.718 "params": { 00:17:55.718 "node_base": "iqn.2016-06.io.spdk", 00:17:55.718 "max_sessions": 128, 00:17:55.718 "max_connections_per_session": 2, 00:17:55.718 "max_queue_depth": 64, 00:17:55.718 "default_time2wait": 2, 00:17:55.718 "default_time2retain": 20, 00:17:55.718 "first_burst_length": 8192, 00:17:55.718 "immediate_data": true, 00:17:55.718 "allow_duplicated_isid": false, 00:17:55.719 "error_recovery_level": 0, 00:17:55.719 "nop_timeout": 60, 00:17:55.719 "nop_in_interval": 30, 00:17:55.719 "disable_chap": false, 00:17:55.719 "require_chap": false, 00:17:55.719 "mutual_chap": false, 00:17:55.719 "chap_group": 0, 00:17:55.719 "max_large_datain_per_connection": 64, 00:17:55.719 "max_r2t_per_connection": 4, 00:17:55.719 "pdu_pool_size": 36864, 00:17:55.719 "immediate_data_pool_size": 16384, 00:17:55.719 "data_out_pool_size": 2048 00:17:55.719 } 00:17:55.719 } 00:17:55.719 ] 00:17:55.719 } 00:17:55.719 ] 00:17:55.719 }' 00:17:55.719 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.719 14:50:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:55.719 [2024-12-09 14:50:33.647247] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:17:55.719 [2024-12-09 14:50:33.647368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:17:55.719 [2024-12-09 14:50:33.801502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.980 [2024-12-09 14:50:33.889669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.553 [2024-12-09 14:50:34.590820] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:56.553 [2024-12-09 14:50:34.591494] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:56.553 [2024-12-09 14:50:34.598919] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:56.553 [2024-12-09 14:50:34.598983] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:56.553 [2024-12-09 14:50:34.598991] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:56.553 [2024-12-09 14:50:34.598997] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:56.553 [2024-12-09 14:50:34.607891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:56.553 [2024-12-09 14:50:34.607910] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:56.553 [2024-12-09 14:50:34.614827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:56.553 [2024-12-09 14:50:34.614906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:56.553 [2024-12-09 14:50:34.631819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:56.553 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75037 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75037 ']' 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75037 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75037 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.815 killing process with pid 75037 00:17:56.815 
14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75037' 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75037 00:17:56.815 14:50:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75037 00:17:57.760 [2024-12-09 14:50:35.750622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:57.760 [2024-12-09 14:50:35.780900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:57.760 [2024-12-09 14:50:35.780998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:57.760 [2024-12-09 14:50:35.787828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:57.760 [2024-12-09 14:50:35.787870] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:57.760 [2024-12-09 14:50:35.787877] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:57.760 [2024-12-09 14:50:35.787901] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:57.760 [2024-12-09 14:50:35.788023] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:59.146 14:50:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:59.146 00:17:59.146 real 0m7.929s 00:17:59.146 user 0m5.168s 00:17:59.146 sys 0m3.426s 00:17:59.146 14:50:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.146 ************************************ 00:17:59.146 END TEST test_save_ublk_config 00:17:59.146 ************************************ 00:17:59.146 14:50:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:59.146 14:50:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75110 00:17:59.146 14:50:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.146 14:50:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75110 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@835 -- # '[' -z 75110 ']' 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.146 14:50:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:59.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.146 14:50:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.408 [2024-12-09 14:50:37.335756] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:17:59.408 [2024-12-09 14:50:37.335888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75110 ] 00:17:59.408 [2024-12-09 14:50:37.491744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.669 [2024-12-09 14:50:37.577233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.669 [2024-12-09 14:50:37.577326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.243 14:50:38 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.243 14:50:38 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:00.243 14:50:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:00.243 14:50:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.243 14:50:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.243 14:50:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.243 ************************************ 00:18:00.243 START TEST test_create_ublk 00:18:00.243 ************************************ 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:00.243 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.243 [2024-12-09 14:50:38.185823] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:00.243 [2024-12-09 14:50:38.187551] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.243 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:00.243 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.243 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:00.243 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.243 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.506 [2024-12-09 14:50:38.367941] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:00.506 [2024-12-09 14:50:38.368269] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:00.506 [2024-12-09 14:50:38.368283] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:00.506 [2024-12-09 14:50:38.368289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:00.506 [2024-12-09 14:50:38.375840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:00.506 [2024-12-09 14:50:38.375858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:00.506 
[2024-12-09 14:50:38.383825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:00.506 [2024-12-09 14:50:38.384351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:00.506 [2024-12-09 14:50:38.414838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:00.506 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:00.506 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.506 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.506 14:50:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:00.506 { 00:18:00.506 "ublk_device": "/dev/ublkb0", 00:18:00.506 "id": 0, 00:18:00.506 "queue_depth": 512, 00:18:00.506 "num_queues": 4, 00:18:00.506 "bdev_name": "Malloc0" 00:18:00.506 } 00:18:00.506 ]' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
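At this point test_create_ublk has created a ublk target, backed it with a 128 MB malloc bdev (4096-byte blocks), exposed it as /dev/ublkb0 with 4 queues of depth 512, and assembled the fio command it is about to run. A standalone reproduction of the same sequence, using the RPC calls visible in the trace, might look like this (rpc.py talks to /var/tmp/spdk.sock by default; the two teardown calls are the ones the test issues after fio completes):

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096             # creates Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
    scripts/rpc.py ublk_stop_disk 0
    scripts/rpc.py ublk_destroy_target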
00:18:00.506 14:50:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:00.768 fio: verification read phase will never start because write phase uses all of runtime 00:18:00.768 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:00.768 fio-3.35 00:18:00.768 Starting 1 process 00:18:10.744 00:18:10.744 fio_test: (groupid=0, jobs=1): err= 0: pid=75149: Mon Dec 9 14:50:48 2024 00:18:10.744 write: IOPS=14.4k, BW=56.2MiB/s (58.9MB/s)(562MiB/10001msec); 0 zone resets 00:18:10.744 clat (usec): min=47, max=8772, avg=68.79, stdev=139.79 00:18:10.744 lat (usec): min=47, max=8789, avg=69.21, stdev=139.81 00:18:10.744 clat percentiles (usec): 00:18:10.744 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:18:10.744 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 64], 00:18:10.744 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 74], 00:18:10.744 | 99.00th=[ 85], 99.50th=[ 190], 99.90th=[ 3163], 99.95th=[ 3589], 00:18:10.744 | 99.99th=[ 4080] 00:18:10.744 bw ( KiB/s): min=22416, max=63224, per=99.84%, avg=57444.63, stdev=9736.29, samples=19 00:18:10.744 iops : min= 5604, max=15806, avg=14361.16, stdev=2434.07, samples=19 00:18:10.744 lat (usec) : 50=0.21%, 100=99.16%, 250=0.33%, 500=0.05%, 750=0.01% 00:18:10.744 lat (usec) : 1000=0.02% 00:18:10.744 lat (msec) : 2=0.07%, 4=0.15%, 10=0.02% 00:18:10.744 cpu : usr=2.01%, sys=11.84%, ctx=143865, majf=0, minf=798 00:18:10.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.744 issued rwts: total=0,143860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.744 00:18:10.744 Run status group 0 (all jobs): 00:18:10.744 WRITE: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=562MiB (589MB), run=10001-10001msec 00:18:10.744 00:18:10.744 Disk stats (read/write): 00:18:10.744 ublkb0: ios=0/142237, merge=0/0, ticks=0/8383, in_queue=8384, util=99.10% 00:18:10.744 14:50:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:10.744 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.744 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.744 [2024-12-09 14:50:48.830585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:11.003 [2024-12-09 14:50:48.876431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:11.003 [2024-12-09 14:50:48.877345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:11.003 [2024-12-09 14:50:48.881891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:11.003 [2024-12-09 14:50:48.882126] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:11.003 [2024-12-09 14:50:48.882135] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.003 14:50:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.003 [2024-12-09 14:50:48.903906] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:11.003 request: 00:18:11.003 { 00:18:11.003 "ublk_id": 0, 00:18:11.003 "method": "ublk_stop_disk", 00:18:11.003 "req_id": 1 00:18:11.003 } 00:18:11.003 Got JSON-RPC error response 00:18:11.003 response: 00:18:11.003 { 00:18:11.003 "code": -19, 00:18:11.003 "message": "No such device" 00:18:11.003 } 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.003 14:50:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.003 [2024-12-09 14:50:48.921885] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:11.003 [2024-12-09 14:50:48.929815] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:11.003 [2024-12-09 14:50:48.929850] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.003 14:50:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.003 14:50:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.261 14:50:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:11.261 14:50:49 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.261 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:11.261 14:50:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:11.520 ************************************ 00:18:11.520 END TEST test_create_ublk 00:18:11.520 ************************************ 00:18:11.520 14:50:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:11.520 00:18:11.520 real 0m11.209s 00:18:11.520 user 0m0.497s 00:18:11.520 sys 0m1.256s 00:18:11.520 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.520 14:50:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 14:50:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:11.520 14:50:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:11.520 14:50:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.520 14:50:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 ************************************ 00:18:11.520 START TEST test_create_multi_ublk 00:18:11.520 ************************************ 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 [2024-12-09 14:50:49.437818] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:11.520 [2024-12-09 14:50:49.439487] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.520 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.778 [2024-12-09 14:50:49.677933] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:18:11.778 [2024-12-09 14:50:49.678267] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:11.778 [2024-12-09 14:50:49.678279] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:11.778 [2024-12-09 14:50:49.678288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:11.778 [2024-12-09 14:50:49.697822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:11.778 [2024-12-09 14:50:49.697844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:11.778 [2024-12-09 14:50:49.709824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:11.778 [2024-12-09 14:50:49.710364] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:11.778 [2024-12-09 14:50:49.739825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.778 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.037 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.037 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:12.037 14:50:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:12.037 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.037 14:50:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.037 [2024-12-09 14:50:49.962920] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:12.037 [2024-12-09 14:50:49.963237] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:12.037 [2024-12-09 14:50:49.963250] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:12.037 [2024-12-09 14:50:49.963256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:12.037 [2024-12-09 14:50:49.972826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:12.037 [2024-12-09 14:50:49.972843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:12.037 [2024-12-09 14:50:49.978822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:12.037 [2024-12-09 14:50:49.979340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:12.037 [2024-12-09 14:50:49.995822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:12.037 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.037 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:12.037 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.037 
14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:12.037 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.037 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.295 [2024-12-09 14:50:50.170921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:12.295 [2024-12-09 14:50:50.171246] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:12.295 [2024-12-09 14:50:50.171258] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:12.295 [2024-12-09 14:50:50.171265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:12.295 [2024-12-09 14:50:50.178831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:12.295 [2024-12-09 14:50:50.178850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:12.295 [2024-12-09 14:50:50.186832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:12.295 [2024-12-09 14:50:50.187366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:12.295 [2024-12-09 14:50:50.195855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.295 [2024-12-09 14:50:50.369940] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:12.295 [2024-12-09 14:50:50.370256] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:12.295 [2024-12-09 14:50:50.370269] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:12.295 [2024-12-09 14:50:50.370275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:12.295 
[2024-12-09 14:50:50.377852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:12.295 [2024-12-09 14:50:50.377868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:12.295 [2024-12-09 14:50:50.385832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:12.295 [2024-12-09 14:50:50.386346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:12.295 [2024-12-09 14:50:50.402830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.295 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:12.554 { 00:18:12.554 "ublk_device": "/dev/ublkb0", 00:18:12.554 "id": 0, 00:18:12.554 "queue_depth": 512, 00:18:12.554 "num_queues": 4, 00:18:12.554 "bdev_name": "Malloc0" 00:18:12.554 }, 00:18:12.554 { 00:18:12.554 "ublk_device": "/dev/ublkb1", 00:18:12.554 "id": 1, 00:18:12.554 "queue_depth": 512, 00:18:12.554 "num_queues": 4, 00:18:12.554 "bdev_name": "Malloc1" 00:18:12.554 }, 00:18:12.554 { 00:18:12.554 "ublk_device": "/dev/ublkb2", 00:18:12.554 "id": 2, 00:18:12.554 "queue_depth": 512, 00:18:12.554 "num_queues": 4, 00:18:12.554 "bdev_name": "Malloc2" 00:18:12.554 }, 00:18:12.554 { 00:18:12.554 "ublk_device": "/dev/ublkb3", 00:18:12.554 "id": 3, 00:18:12.554 "queue_depth": 512, 00:18:12.554 "num_queues": 4, 00:18:12.554 "bdev_name": "Malloc3" 00:18:12.554 } 00:18:12.554 ]' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:12.554 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:12.812 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:13.071 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:13.071 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:13.071 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:13.071 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:13.071 14:50:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.071 [2024-12-09 14:50:51.041898] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.071 [2024-12-09 14:50:51.090420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.071 [2024-12-09 14:50:51.091506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.071 [2024-12-09 14:50:51.097835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.071 [2024-12-09 14:50:51.098087] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:13.071 [2024-12-09 14:50:51.098101] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.071 [2024-12-09 14:50:51.113894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.071 [2024-12-09 14:50:51.154389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.071 [2024-12-09 14:50:51.155413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.071 [2024-12-09 14:50:51.161828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.071 [2024-12-09 14:50:51.162070] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:13.071 [2024-12-09 14:50:51.162084] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.071 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.072 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:13.072 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.072 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.072 [2024-12-09 14:50:51.175914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.330 [2024-12-09 14:50:51.211395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.330 [2024-12-09 14:50:51.212403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.330 [2024-12-09 14:50:51.217832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.330 [2024-12-09 14:50:51.218055] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:13.330 [2024-12-09 14:50:51.218069] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.330 [2024-12-09 14:50:51.233885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.330 [2024-12-09 14:50:51.263325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.330 [2024-12-09 14:50:51.264285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.330 [2024-12-09 14:50:51.273831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.330 [2024-12-09 14:50:51.274049] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:13.330 [2024-12-09 14:50:51.274061] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.330 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:13.588 [2024-12-09 14:50:51.465873] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:13.588 [2024-12-09 14:50:51.473817] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:13.588 [2024-12-09 14:50:51.473844] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:13.588 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:13.588 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.588 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:13.588 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.588 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.846 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.846 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.846 14:50:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:13.846 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.846 14:50:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.413 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:14.671 14:50:52 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:14.671 ************************************ 00:18:14.671 END TEST test_create_multi_ublk 00:18:14.671 ************************************ 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:14.671 00:18:14.671 real 0m3.276s 00:18:14.671 user 0m0.798s 00:18:14.671 sys 0m0.132s 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.671 14:50:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.671 14:50:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:14.671 14:50:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:14.671 14:50:52 ublk -- ublk/ublk.sh@130 -- # killprocess 75110 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 75110 ']' 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@958 -- # kill -0 75110 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@959 -- # uname 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75110 00:18:14.671 killing process with pid 75110 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75110' 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@973 -- # kill 75110 00:18:14.671 14:50:52 ublk -- common/autotest_common.sh@978 -- # wait 75110 00:18:15.238 [2024-12-09 14:50:53.307162] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:15.238 [2024-12-09 14:50:53.307215] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:16.176 00:18:16.176 real 0m24.899s 00:18:16.176 user 0m34.779s 00:18:16.176 sys 0m9.890s 00:18:16.176 14:50:53 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.176 14:50:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 ************************************ 00:18:16.176 END TEST ublk 00:18:16.176 ************************************ 00:18:16.176 14:50:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:16.176 
14:50:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.176 14:50:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.176 14:50:54 -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 ************************************ 00:18:16.176 START TEST ublk_recovery 00:18:16.176 ************************************ 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:16.176 * Looking for test storage... 00:18:16.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.176 14:50:54 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.176 --rc genhtml_branch_coverage=1 00:18:16.176 --rc genhtml_function_coverage=1 00:18:16.176 --rc genhtml_legend=1 00:18:16.176 --rc geninfo_all_blocks=1 00:18:16.176 --rc geninfo_unexecuted_blocks=1 00:18:16.176 00:18:16.176 ' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.176 --rc genhtml_branch_coverage=1 00:18:16.176 --rc genhtml_function_coverage=1 00:18:16.176 --rc genhtml_legend=1 00:18:16.176 --rc geninfo_all_blocks=1 00:18:16.176 --rc geninfo_unexecuted_blocks=1 00:18:16.176 00:18:16.176 ' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.176 --rc genhtml_branch_coverage=1 00:18:16.176 --rc genhtml_function_coverage=1 00:18:16.176 --rc genhtml_legend=1 00:18:16.176 --rc geninfo_all_blocks=1 00:18:16.176 --rc geninfo_unexecuted_blocks=1 00:18:16.176 00:18:16.176 ' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.176 --rc genhtml_branch_coverage=1 00:18:16.176 --rc genhtml_function_coverage=1 00:18:16.176 --rc genhtml_legend=1 00:18:16.176 --rc geninfo_all_blocks=1 00:18:16.176 --rc geninfo_unexecuted_blocks=1 00:18:16.176 00:18:16.176 ' 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:16.176 14:50:54 ublk_recovery -- lvol/common.sh@14 
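The preamble traced above also shows scripts/common.sh comparing the installed lcov version against 2 ("lt 1.15 2"): both version strings are split on any of ./-/: into arrays and compared numerically, field by field. A simplified sketch of that comparison (the real helper additionally validates each field with its decimal() check before comparing):

# version_lt A B: succeed (return 0) when dotted version A sorts before B.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}               # missing fields count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                                          # equal versions are not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the --rc lcov_* option spellings"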
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:16.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75494 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75494 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75494 ']' 00:18:16.176 14:50:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.176 14:50:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 [2024-12-09 14:50:54.290294] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:18:16.176 [2024-12-09 14:50:54.290636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75494 ] 00:18:16.435 [2024-12-09 14:50:54.447920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:16.435 [2024-12-09 14:50:54.535904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.435 [2024-12-09 14:50:54.535950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.001 14:50:55 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.001 14:50:55 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:17.001 14:50:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:17.002 14:50:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.002 14:50:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 [2024-12-09 14:50:55.074823] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:17.002 [2024-12-09 14:50:55.076500] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:17.002 14:50:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.002 14:50:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:17.002 14:50:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.002 14:50:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.259 malloc0 00:18:17.259 14:50:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.259 14:50:55 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:17.259 14:50:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.259 14:50:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.259 [2024-12-09 14:50:55.162931] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:17.259 [2024-12-09 14:50:55.163017] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:17.259 [2024-12-09 14:50:55.163027] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:17.259 [2024-12-09 14:50:55.163033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:17.259 [2024-12-09 14:50:55.171920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:17.259 [2024-12-09 14:50:55.171938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:17.259 [2024-12-09 14:50:55.178826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:17.259 [2024-12-09 14:50:55.178944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:17.259 [2024-12-09 14:50:55.195835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:17.259 1 00:18:17.259 14:50:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.259 14:50:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:18.191 14:50:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75530 00:18:18.191 14:50:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:18.191 14:50:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:18.191 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.191 fio-3.35 00:18:18.191 Starting 1 process 00:18:23.458 14:51:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75494 00:18:23.458 14:51:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:28.751 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75494 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:28.751 14:51:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75641 00:18:28.751 14:51:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.751 14:51:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75641 00:18:28.751 14:51:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75641 ']' 00:18:28.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.751 14:51:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.751 [2024-12-09 14:51:06.308734] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
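Everything from here to the fio summary further below is the crash-and-recover scenario itself: fio keeps hammering /dev/ublkb1 while the first target (pid 75494) is killed with SIGKILL and a fresh one is started, after which the device is reattached with ublk_recover_disk, traced just below. A sketch of the driving logic, reusing only the paths, flags, and RPCs shown in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!
kill -9 "$spdk_pid"                            # hard-kill the serving target (75494 above)
sleep 5
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!                                    # 75641 in this run
# (the real script waits for the RPC socket via waitforlisten before continuing)
"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096
"$rpc" ublk_recover_disk malloc0 1             # reattach ublk id 1 to the new target
wait "$fio_proc"                               # fio rides out the outage and reports below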
00:18:28.751 [2024-12-09 14:51:06.309478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75641 ] 00:18:28.751 [2024-12-09 14:51:06.482594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:28.751 [2024-12-09 14:51:06.629770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.751 [2024-12-09 14:51:06.629931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:29.325 14:51:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.325 [2024-12-09 14:51:07.436840] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:29.325 [2024-12-09 14:51:07.439521] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.325 14:51:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.325 14:51:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.586 malloc0 00:18:29.586 14:51:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.586 14:51:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:29.586 14:51:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.586 14:51:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.587 [2024-12-09 14:51:07.573049] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:29.587 [2024-12-09 14:51:07.573099] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:29.587 [2024-12-09 14:51:07.573112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:29.587 [2024-12-09 14:51:07.577874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:29.587 [2024-12-09 14:51:07.577909] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:29.587 1 00:18:29.587 14:51:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.587 14:51:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75530 00:18:30.526 [2024-12-09 14:51:08.577952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:30.526 [2024-12-09 14:51:08.587840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:30.526 [2024-12-09 14:51:08.587860] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:31.489 [2024-12-09 14:51:09.587883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:31.489 [2024-12-09 14:51:09.597823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:31.489 [2024-12-09 14:51:09.597842] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:32.873 [2024-12-09 14:51:10.601821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:32.873 [2024-12-09 14:51:10.606821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:32.873 [2024-12-09 14:51:10.606836] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:32.873 [2024-12-09 14:51:10.606845] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:32.873 [2024-12-09 14:51:10.606927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:54.800 [2024-12-09 14:51:31.725840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:54.800 [2024-12-09 14:51:31.729880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:54.800 [2024-12-09 14:51:31.735046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:54.800 [2024-12-09 14:51:31.735065] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:21.373 00:19:21.373 fio_test: (groupid=0, jobs=1): err= 0: pid=75533: Mon Dec 9 14:51:56 2024 00:19:21.373 read: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(3157MiB/60002msec) 00:19:21.373 slat (nsec): min=1293, max=1700.3k, avg=5647.61, stdev=2439.59 00:19:21.374 clat (usec): min=1099, max=30535k, avg=4740.10, stdev=273789.49 00:19:21.374 lat (usec): min=1108, max=30535k, avg=4745.74, stdev=273789.49 00:19:21.374 clat percentiles (usec): 00:19:21.374 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2114], 00:19:21.374 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2147], 60.00th=[ 2180], 00:19:21.374 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3228], 00:19:21.374 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 8160], 99.95th=[ 8586], 00:19:21.374 | 99.99th=[13173] 00:19:21.374 bw ( KiB/s): min=21176, max=114352, per=100.00%, avg=107895.73, stdev=15443.28, samples=59 00:19:21.374 iops : min= 5294, max=28588, avg=26973.93, stdev=3860.82, samples=59 00:19:21.374 write: IOPS=13.5k, BW=52.5MiB/s (55.1MB/s)(3153MiB/60002msec); 0 zone resets 00:19:21.374 slat (nsec): min=1529, max=3308.1k, avg=5868.68, stdev=3986.54 00:19:21.374 clat (usec): min=1127, max=30535k, avg=4755.91, stdev=269732.89 00:19:21.374 lat (usec): min=1133, max=30535k, avg=4761.78, stdev=269732.89 00:19:21.374 clat percentiles (usec): 00:19:21.374 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2212], 00:19:21.374 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:19:21.374 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3195], 00:19:21.374 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 8225], 99.95th=[ 8455], 00:19:21.374 | 99.99th=[13304] 00:19:21.374 bw ( KiB/s): min=20800, max=113616, per=100.00%, avg=107715.93, stdev=15287.14, samples=59 00:19:21.374 iops : min= 5200, max=28404, avg=26928.98, stdev=3821.79, samples=59 00:19:21.374 lat (msec) : 2=2.02%, 4=95.15%, 10=2.80%, 20=0.03%, >=2000=0.01% 00:19:21.374 cpu : usr=3.12%, sys=15.82%, ctx=52996, majf=0, minf=13 00:19:21.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.374 issued rwts: total=808269,807154,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:19:21.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.374 00:19:21.374 Run status group 0 (all jobs): 00:19:21.374 READ: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=3157MiB (3311MB), run=60002-60002msec 00:19:21.374 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=3153MiB (3306MB), run=60002-60002msec 00:19:21.374 00:19:21.374 Disk stats (read/write): 00:19:21.374 ublkb1: ios=805303/804128, merge=0/0, ticks=3777822/3714033, in_queue=7491856, util=99.92% 00:19:21.374 14:51:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 [2024-12-09 14:51:56.449895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:21.374 [2024-12-09 14:51:56.487946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:21.374 [2024-12-09 14:51:56.488191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:21.374 [2024-12-09 14:51:56.496843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:21.374 [2024-12-09 14:51:56.500911] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:21.374 [2024-12-09 14:51:56.500925] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.374 14:51:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 [2024-12-09 14:51:56.504956] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:21.374 [2024-12-09 14:51:56.511833] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:21.374 [2024-12-09 14:51:56.511864] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.374 14:51:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:21.374 14:51:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:21.374 14:51:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75641 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75641 ']' 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75641 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75641 00:19:21.374 killing process with pid 75641 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75641' 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75641 00:19:21.374 14:51:56 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75641 00:19:21.374 [2024-12-09 
14:51:57.602107] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:21.374 [2024-12-09 14:51:57.602158] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:21.374 ************************************ 00:19:21.374 END TEST ublk_recovery 00:19:21.374 ************************************ 00:19:21.374 00:19:21.374 real 1m4.297s 00:19:21.374 user 1m44.183s 00:19:21.374 sys 0m24.914s 00:19:21.374 14:51:58 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.374 14:51:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 14:51:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:21.374 14:51:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:21.374 14:51:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.374 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 14:51:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:21.374 14:51:58 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:21.374 14:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.374 14:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.374 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:19:21.374 ************************************ 00:19:21.374 START TEST ftl 00:19:21.374 ************************************ 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:21.374 * Looking for test storage... 
00:19:21.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.374 14:51:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.374 14:51:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.374 14:51:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.374 14:51:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.374 14:51:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.374 14:51:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:21.374 14:51:58 ftl -- scripts/common.sh@345 -- # : 1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.374 14:51:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.374 14:51:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@353 -- # local d=1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.374 14:51:58 ftl -- scripts/common.sh@355 -- # echo 1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.374 14:51:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@353 -- # local d=2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.374 14:51:58 ftl -- scripts/common.sh@355 -- # echo 2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.374 14:51:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.374 14:51:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.374 14:51:58 ftl -- scripts/common.sh@368 -- # return 0 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:21.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.374 --rc genhtml_branch_coverage=1 00:19:21.374 --rc genhtml_function_coverage=1 00:19:21.374 --rc genhtml_legend=1 00:19:21.374 --rc geninfo_all_blocks=1 00:19:21.374 --rc geninfo_unexecuted_blocks=1 00:19:21.374 00:19:21.374 ' 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:21.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.374 --rc genhtml_branch_coverage=1 00:19:21.374 --rc genhtml_function_coverage=1 00:19:21.374 --rc genhtml_legend=1 00:19:21.374 --rc geninfo_all_blocks=1 00:19:21.374 --rc geninfo_unexecuted_blocks=1 00:19:21.374 00:19:21.374 ' 00:19:21.374 14:51:58 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:21.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.374 --rc genhtml_branch_coverage=1 00:19:21.375 --rc genhtml_function_coverage=1 00:19:21.375 --rc 
genhtml_legend=1 00:19:21.375 --rc geninfo_all_blocks=1 00:19:21.375 --rc geninfo_unexecuted_blocks=1 00:19:21.375 00:19:21.375 ' 00:19:21.375 14:51:58 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:21.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.375 --rc genhtml_branch_coverage=1 00:19:21.375 --rc genhtml_function_coverage=1 00:19:21.375 --rc genhtml_legend=1 00:19:21.375 --rc geninfo_all_blocks=1 00:19:21.375 --rc geninfo_unexecuted_blocks=1 00:19:21.375 00:19:21.375 ' 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:21.375 14:51:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:21.375 14:51:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.375 14:51:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:21.375 14:51:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:21.375 14:51:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:21.375 14:51:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.375 14:51:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.375 14:51:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.375 14:51:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.375 14:51:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:21.375 14:51:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:21.375 14:51:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:21.375 14:51:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.375 14:51:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:21.375 14:51:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:21.375 14:51:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.375 14:51:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:21.375 14:51:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.375 14:51:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:21.375 14:51:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:21.375 14:51:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:21.375 14:51:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.375 14:51:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:21.375 14:51:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:21.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:21.375 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.375 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.375 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.375 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.375 14:51:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76446 00:19:21.375 14:51:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76446 00:19:21.375 14:51:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@835 -- # '[' -z 76446 ']' 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.375 14:51:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:21.375 [2024-12-09 14:51:59.269187] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:21.375 [2024-12-09 14:51:59.269648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76446 ] 00:19:21.375 [2024-12-09 14:51:59.438174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.636 [2024-12-09 14:51:59.582086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.209 14:52:00 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.209 14:52:00 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:22.209 14:52:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:22.209 14:52:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:23.595 14:52:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:23.595 14:52:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:23.856 14:52:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:23.856 14:52:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:23.856 14:52:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@50 -- # break 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:24.118 14:52:01 ftl -- 
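The jq filter traced above is the FTL device selection: the cache disk must expose 64-byte metadata (md_size==64), be non-zoned, and have at least 1310720 blocks; the matching base-disk filter, which excludes the cache's PCI address, follows just below. A standalone sketch of the same selection; the --arg parameterization stands in for the PCI address the script interpolates directly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nv_cache=$("$rpc" bdev_get_bdevs | jq -r '.[]
    | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address' | head -n1)
device=$("$rpc" bdev_get_bdevs | jq -r --arg c "$nv_cache" '.[]
    | select(.driver_specific.nvme[0].pci_address != $c and .zoned == false
             and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
echo "base=$device cache=$nv_cache"            # 0000:00:11.0 / 0000:00:10.0 in this run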
ftl/ftl.sh@59 -- # base_size=1310720 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:24.118 14:52:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:24.118 14:52:02 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:24.118 14:52:02 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:24.118 14:52:02 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:24.118 14:52:02 ftl -- ftl/ftl.sh@63 -- # break 00:19:24.118 14:52:02 ftl -- ftl/ftl.sh@66 -- # killprocess 76446 00:19:24.118 14:52:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 76446 ']' 00:19:24.118 14:52:02 ftl -- common/autotest_common.sh@958 -- # kill -0 76446 00:19:24.118 14:52:02 ftl -- common/autotest_common.sh@959 -- # uname 00:19:24.118 14:52:02 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.118 14:52:02 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76446 00:19:24.379 killing process with pid 76446 00:19:24.379 14:52:02 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.379 14:52:02 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.379 14:52:02 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76446' 00:19:24.379 14:52:02 ftl -- common/autotest_common.sh@973 -- # kill 76446 00:19:24.379 14:52:02 ftl -- common/autotest_common.sh@978 -- # wait 76446 00:19:25.766 14:52:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:25.766 14:52:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:25.766 14:52:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:25.766 14:52:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.766 14:52:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:25.766 ************************************ 00:19:25.766 START TEST ftl_fio_basic 00:19:25.766 ************************************ 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:25.766 * Looking for test storage... 
00:19:25.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.766 --rc genhtml_branch_coverage=1 00:19:25.766 --rc genhtml_function_coverage=1 00:19:25.766 --rc genhtml_legend=1 00:19:25.766 --rc geninfo_all_blocks=1 00:19:25.766 --rc geninfo_unexecuted_blocks=1 00:19:25.766 00:19:25.766 ' 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.766 --rc 
genhtml_branch_coverage=1 00:19:25.766 --rc genhtml_function_coverage=1 00:19:25.766 --rc genhtml_legend=1 00:19:25.766 --rc geninfo_all_blocks=1 00:19:25.766 --rc geninfo_unexecuted_blocks=1 00:19:25.766 00:19:25.766 ' 00:19:25.766 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.766 --rc genhtml_branch_coverage=1 00:19:25.766 --rc genhtml_function_coverage=1 00:19:25.766 --rc genhtml_legend=1 00:19:25.766 --rc geninfo_all_blocks=1 00:19:25.766 --rc geninfo_unexecuted_blocks=1 00:19:25.766 00:19:25.766 ' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.767 --rc genhtml_branch_coverage=1 00:19:25.767 --rc genhtml_function_coverage=1 00:19:25.767 --rc genhtml_legend=1 00:19:25.767 --rc geninfo_all_blocks=1 00:19:25.767 --rc geninfo_unexecuted_blocks=1 00:19:25.767 00:19:25.767 ' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:25.767 
14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76579 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76579 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76579 ']' 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
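fio.sh's suite table, traced above, maps the third CLI argument ("basic" for this run) to a space-separated list of fio job configs. A minimal sketch of that dispatch, with the loop body reduced to an echo:

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
tests=${suite[$3]:-}                           # $3 is "basic" for this run
[ -n "$tests" ] || { echo "unknown suite: $3" >&2; exit 1; }
for t in $tests; do
    echo "would run fio job config: $t"        # the real script feeds these to fio
done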
00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.767 14:52:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.767 [2024-12-09 14:52:03.783310] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:19:25.767 [2024-12-09 14:52:03.783555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:19:26.028 [2024-12-09 14:52:03.946907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.028 [2024-12-09 14:52:04.089548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.028 [2024-12-09 14:52:04.089932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.028 [2024-12-09 14:52:04.089937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:26.972 14:52:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:27.233 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.494 { 00:19:27.494 "name": "nvme0n1", 00:19:27.494 "aliases": [ 00:19:27.494 "6fc2bfd3-c0f2-406e-b868-15f63bb37c5b" 00:19:27.494 ], 00:19:27.494 "product_name": "NVMe disk", 00:19:27.494 "block_size": 4096, 00:19:27.494 "num_blocks": 1310720, 00:19:27.494 "uuid": "6fc2bfd3-c0f2-406e-b868-15f63bb37c5b", 00:19:27.494 "numa_id": -1, 00:19:27.494 "assigned_rate_limits": { 00:19:27.494 "rw_ios_per_sec": 0, 00:19:27.494 "rw_mbytes_per_sec": 0, 00:19:27.494 "r_mbytes_per_sec": 0, 00:19:27.494 "w_mbytes_per_sec": 0 00:19:27.494 }, 00:19:27.494 "claimed": false, 00:19:27.494 "zoned": false, 00:19:27.494 "supported_io_types": { 00:19:27.494 "read": true, 00:19:27.494 "write": true, 00:19:27.494 "unmap": true, 00:19:27.494 "flush": true, 00:19:27.494 "reset": true, 00:19:27.494 "nvme_admin": true, 00:19:27.494 "nvme_io": true, 00:19:27.494 "nvme_io_md": 
false, 00:19:27.494 "write_zeroes": true, 00:19:27.494 "zcopy": false, 00:19:27.494 "get_zone_info": false, 00:19:27.494 "zone_management": false, 00:19:27.494 "zone_append": false, 00:19:27.494 "compare": true, 00:19:27.494 "compare_and_write": false, 00:19:27.494 "abort": true, 00:19:27.494 "seek_hole": false, 00:19:27.494 "seek_data": false, 00:19:27.494 "copy": true, 00:19:27.494 "nvme_iov_md": false 00:19:27.494 }, 00:19:27.494 "driver_specific": { 00:19:27.494 "nvme": [ 00:19:27.494 { 00:19:27.494 "pci_address": "0000:00:11.0", 00:19:27.494 "trid": { 00:19:27.494 "trtype": "PCIe", 00:19:27.494 "traddr": "0000:00:11.0" 00:19:27.494 }, 00:19:27.494 "ctrlr_data": { 00:19:27.494 "cntlid": 0, 00:19:27.494 "vendor_id": "0x1b36", 00:19:27.494 "model_number": "QEMU NVMe Ctrl", 00:19:27.494 "serial_number": "12341", 00:19:27.494 "firmware_revision": "8.0.0", 00:19:27.494 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:27.494 "oacs": { 00:19:27.494 "security": 0, 00:19:27.494 "format": 1, 00:19:27.494 "firmware": 0, 00:19:27.494 "ns_manage": 1 00:19:27.494 }, 00:19:27.494 "multi_ctrlr": false, 00:19:27.494 "ana_reporting": false 00:19:27.494 }, 00:19:27.494 "vs": { 00:19:27.494 "nvme_version": "1.4" 00:19:27.494 }, 00:19:27.494 "ns_data": { 00:19:27.494 "id": 1, 00:19:27.494 "can_share": false 00:19:27.494 } 00:19:27.494 } 00:19:27.494 ], 00:19:27.494 "mp_policy": "active_passive" 00:19:27.494 } 00:19:27.494 } 00:19:27.494 ]' 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:27.494 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:27.755 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:27.755 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:27.755 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=9770be25-1d21-48e5-a634-4c01a7baefbb 00:19:27.755 14:52:05 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9770be25-1d21-48e5-a634-4c01a7baefbb 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.015 14:52:06 
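The bs=4096 / nb=1310720 / bdev_size=5120 sequence above is get_bdev_size at work: it pulls the bdev's JSON once, multiplies block size by block count, and reports MiB, which then feeds the `[[ 103424 -le 5120 ]]` size check above (and it is called again on the new lvol just below). A sketch under the same rpc.py path, with $rpc assumed as before:

get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for nvme0n1 above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720 for nvme0n1 above
    echo $(( bs * nb / 1024 / 1024 ))              # 4096 * 1310720 -> 5120 MiB
}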
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:28.015 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.275 { 00:19:28.275 "name": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:28.275 "aliases": [ 00:19:28.275 "lvs/nvme0n1p0" 00:19:28.275 ], 00:19:28.275 "product_name": "Logical Volume", 00:19:28.275 "block_size": 4096, 00:19:28.275 "num_blocks": 26476544, 00:19:28.275 "uuid": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:28.275 "assigned_rate_limits": { 00:19:28.275 "rw_ios_per_sec": 0, 00:19:28.275 "rw_mbytes_per_sec": 0, 00:19:28.275 "r_mbytes_per_sec": 0, 00:19:28.275 "w_mbytes_per_sec": 0 00:19:28.275 }, 00:19:28.275 "claimed": false, 00:19:28.275 "zoned": false, 00:19:28.275 "supported_io_types": { 00:19:28.275 "read": true, 00:19:28.275 "write": true, 00:19:28.275 "unmap": true, 00:19:28.275 "flush": false, 00:19:28.275 "reset": true, 00:19:28.275 "nvme_admin": false, 00:19:28.275 "nvme_io": false, 00:19:28.275 "nvme_io_md": false, 00:19:28.275 "write_zeroes": true, 00:19:28.275 "zcopy": false, 00:19:28.275 "get_zone_info": false, 00:19:28.275 "zone_management": false, 00:19:28.275 "zone_append": false, 00:19:28.275 "compare": false, 00:19:28.275 "compare_and_write": false, 00:19:28.275 "abort": false, 00:19:28.275 "seek_hole": true, 00:19:28.275 "seek_data": true, 00:19:28.275 "copy": false, 00:19:28.275 "nvme_iov_md": false 00:19:28.275 }, 00:19:28.275 "driver_specific": { 00:19:28.275 "lvol": { 00:19:28.275 "lvol_store_uuid": "9770be25-1d21-48e5-a634-4c01a7baefbb", 00:19:28.275 "base_bdev": "nvme0n1", 00:19:28.275 "thin_provision": true, 00:19:28.275 "num_allocated_clusters": 0, 00:19:28.275 "snapshot": false, 00:19:28.275 "clone": false, 00:19:28.275 "esnap_clone": false 00:19:28.275 } 00:19:28.275 } 00:19:28.275 } 00:19:28.275 ]' 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:28.275 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- 
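At this point create_nv_cache_bdev has attached the cache controller (nvc0, exposing nvc0n1); the lines that follow size the cache and carve a single partition off it with bdev_split_create. The whole sequence in isolation, reusing the values from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # exposes nvc0n1
cache_size=5171                                # MiB, as computed in the trace below
"$rpc" bdev_split_create nvc0n1 -s "$cache_size" 1                   # one split: nvc0n1p0
nv_cache=nvc0n1p0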
ftl/common.sh@47 -- # [[ -z '' ]] 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:28.533 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.791 { 00:19:28.791 "name": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:28.791 "aliases": [ 00:19:28.791 "lvs/nvme0n1p0" 00:19:28.791 ], 00:19:28.791 "product_name": "Logical Volume", 00:19:28.791 "block_size": 4096, 00:19:28.791 "num_blocks": 26476544, 00:19:28.791 "uuid": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:28.791 "assigned_rate_limits": { 00:19:28.791 "rw_ios_per_sec": 0, 00:19:28.791 "rw_mbytes_per_sec": 0, 00:19:28.791 "r_mbytes_per_sec": 0, 00:19:28.791 "w_mbytes_per_sec": 0 00:19:28.791 }, 00:19:28.791 "claimed": false, 00:19:28.791 "zoned": false, 00:19:28.791 "supported_io_types": { 00:19:28.791 "read": true, 00:19:28.791 "write": true, 00:19:28.791 "unmap": true, 00:19:28.791 "flush": false, 00:19:28.791 "reset": true, 00:19:28.791 "nvme_admin": false, 00:19:28.791 "nvme_io": false, 00:19:28.791 "nvme_io_md": false, 00:19:28.791 "write_zeroes": true, 00:19:28.791 "zcopy": false, 00:19:28.791 "get_zone_info": false, 00:19:28.791 "zone_management": false, 00:19:28.791 "zone_append": false, 00:19:28.791 "compare": false, 00:19:28.791 "compare_and_write": false, 00:19:28.791 "abort": false, 00:19:28.791 "seek_hole": true, 00:19:28.791 "seek_data": true, 00:19:28.791 "copy": false, 00:19:28.791 "nvme_iov_md": false 00:19:28.791 }, 00:19:28.791 "driver_specific": { 00:19:28.791 "lvol": { 00:19:28.791 "lvol_store_uuid": "9770be25-1d21-48e5-a634-4c01a7baefbb", 00:19:28.791 "base_bdev": "nvme0n1", 00:19:28.791 "thin_provision": true, 00:19:28.791 "num_allocated_clusters": 0, 00:19:28.791 "snapshot": false, 00:19:28.791 "clone": false, 00:19:28.791 "esnap_clone": false 00:19:28.791 } 00:19:28.791 } 00:19:28.791 } 00:19:28.791 ]' 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:28.791 14:52:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:29.050 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:29.050 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c 00:19:29.308 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.308 { 00:19:29.308 "name": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:29.308 "aliases": [ 00:19:29.308 "lvs/nvme0n1p0" 00:19:29.308 ], 00:19:29.308 "product_name": "Logical Volume", 00:19:29.308 "block_size": 4096, 00:19:29.308 "num_blocks": 26476544, 00:19:29.308 "uuid": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:29.308 "assigned_rate_limits": { 00:19:29.308 "rw_ios_per_sec": 0, 00:19:29.308 "rw_mbytes_per_sec": 0, 00:19:29.308 "r_mbytes_per_sec": 0, 00:19:29.308 "w_mbytes_per_sec": 0 00:19:29.308 }, 00:19:29.308 "claimed": false, 00:19:29.308 "zoned": false, 00:19:29.308 "supported_io_types": { 00:19:29.308 "read": true, 00:19:29.308 "write": true, 00:19:29.308 "unmap": true, 00:19:29.308 "flush": false, 00:19:29.308 "reset": true, 00:19:29.308 "nvme_admin": false, 00:19:29.308 "nvme_io": false, 00:19:29.308 "nvme_io_md": false, 00:19:29.308 "write_zeroes": true, 00:19:29.308 "zcopy": false, 00:19:29.308 "get_zone_info": false, 00:19:29.308 "zone_management": false, 00:19:29.308 "zone_append": false, 00:19:29.308 "compare": false, 00:19:29.308 "compare_and_write": false, 00:19:29.308 "abort": false, 00:19:29.308 "seek_hole": true, 00:19:29.309 "seek_data": true, 00:19:29.309 "copy": false, 00:19:29.309 "nvme_iov_md": false 00:19:29.309 }, 00:19:29.309 "driver_specific": { 00:19:29.309 "lvol": { 00:19:29.309 "lvol_store_uuid": "9770be25-1d21-48e5-a634-4c01a7baefbb", 00:19:29.309 "base_bdev": "nvme0n1", 00:19:29.309 "thin_provision": true, 00:19:29.309 "num_allocated_clusters": 0, 00:19:29.309 "snapshot": false, 00:19:29.309 "clone": false, 00:19:29.309 "esnap_clone": false 00:19:29.309 } 00:19:29.309 } 00:19:29.309 } 00:19:29.309 ]' 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:29.309 14:52:07 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c -c nvc0n1p0 --l2p_dram_limit 60 00:19:29.567 [2024-12-09 14:52:07.559208] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.567 [2024-12-09 14:52:07.559246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.567 [2024-12-09 14:52:07.559261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.567 [2024-12-09 14:52:07.559268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.567 [2024-12-09 14:52:07.559316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.567 [2024-12-09 14:52:07.559327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.567 [2024-12-09 14:52:07.559337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:29.568 [2024-12-09 14:52:07.559344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.559376] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.568 [2024-12-09 14:52:07.559937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.568 [2024-12-09 14:52:07.559959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.559967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.568 [2024-12-09 14:52:07.559976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:19:29.568 [2024-12-09 14:52:07.559982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.560093] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef982cf9-8f78-4859-9d6e-708e1b733c52 00:19:29.568 [2024-12-09 14:52:07.561386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.561413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:29.568 [2024-12-09 14:52:07.561423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:29.568 [2024-12-09 14:52:07.561432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.568271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.568364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.568 [2024-12-09 14:52:07.568435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.790 ms 00:19:29.568 [2024-12-09 14:52:07.568457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.568633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.568664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.568 [2024-12-09 14:52:07.568685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:29.568 [2024-12-09 14:52:07.568821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.568896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.568965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.568 [2024-12-09 14:52:07.568990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:29.568 [2024-12-09 14:52:07.569033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:29.568 [2024-12-09 14:52:07.569069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.568 [2024-12-09 14:52:07.572414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.572504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.568 [2024-12-09 14:52:07.572582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:19:29.568 [2024-12-09 14:52:07.572608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.572696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.572744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.568 [2024-12-09 14:52:07.572767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:29.568 [2024-12-09 14:52:07.572783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.572837] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:29.568 [2024-12-09 14:52:07.573012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:29.568 [2024-12-09 14:52:07.573086] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.568 [2024-12-09 14:52:07.573138] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:29.568 [2024-12-09 14:52:07.573171] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.568 [2024-12-09 14:52:07.573262] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.568 [2024-12-09 14:52:07.573293] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.568 [2024-12-09 14:52:07.573310] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.568 [2024-12-09 14:52:07.573327] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:29.568 [2024-12-09 14:52:07.573343] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:29.568 [2024-12-09 14:52:07.573360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.573437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.568 [2024-12-09 14:52:07.573458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:19:29.568 [2024-12-09 14:52:07.573474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.573562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.568 [2024-12-09 14:52:07.573627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.568 [2024-12-09 14:52:07.573648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:29.568 [2024-12-09 14:52:07.573664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.568 [2024-12-09 14:52:07.573770] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.568 [2024-12-09 14:52:07.573793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.568 
[2024-12-09 14:52:07.573825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.568 [2024-12-09 14:52:07.573872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.573892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.568 [2024-12-09 14:52:07.573907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.573924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.568 [2024-12-09 14:52:07.574024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.568 [2024-12-09 14:52:07.574048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.568 [2024-12-09 14:52:07.574054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.568 [2024-12-09 14:52:07.574061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.568 [2024-12-09 14:52:07.574066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.568 [2024-12-09 14:52:07.574073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:29.568 [2024-12-09 14:52:07.574079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.568 [2024-12-09 14:52:07.574093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.568 [2024-12-09 14:52:07.574112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.568 [2024-12-09 14:52:07.574129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.568 [2024-12-09 14:52:07.574148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.568 [2024-12-09 14:52:07.574164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.568 [2024-12-09 14:52:07.574184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:29.568 [2024-12-09 14:52:07.574209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.568 [2024-12-09 14:52:07.574215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:29.568 [2024-12-09 14:52:07.574222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.568 [2024-12-09 14:52:07.574227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:29.568 [2024-12-09 14:52:07.574234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:29.568 [2024-12-09 14:52:07.574239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:29.568 [2024-12-09 14:52:07.574252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:29.568 [2024-12-09 14:52:07.574258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.568 [2024-12-09 14:52:07.574271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.568 [2024-12-09 14:52:07.574277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.568 [2024-12-09 14:52:07.574284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.568 [2024-12-09 14:52:07.574292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.568 [2024-12-09 14:52:07.574300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.568 [2024-12-09 14:52:07.574308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.568 [2024-12-09 14:52:07.574316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.568 [2024-12-09 14:52:07.574321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.568 [2024-12-09 14:52:07.574328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.569 [2024-12-09 14:52:07.574335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.569 [2024-12-09 14:52:07.574345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.569 [2024-12-09 14:52:07.574361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:29.569 [2024-12-09 14:52:07.574367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:29.569 [2024-12-09 14:52:07.574374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:29.569 [2024-12-09 14:52:07.574380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:29.569 [2024-12-09 14:52:07.574387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:29.569 [2024-12-09 
14:52:07.574393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:29.569 [2024-12-09 14:52:07.574400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:29.569 [2024-12-09 14:52:07.574406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:29.569 [2024-12-09 14:52:07.574415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:29.569 [2024-12-09 14:52:07.574445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.569 [2024-12-09 14:52:07.574453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.569 [2024-12-09 14:52:07.574469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.569 [2024-12-09 14:52:07.574475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.569 [2024-12-09 14:52:07.574483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:29.569 [2024-12-09 14:52:07.574489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.569 [2024-12-09 14:52:07.574497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.569 [2024-12-09 14:52:07.574503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:19:29.569 [2024-12-09 14:52:07.574511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.569 [2024-12-09 14:52:07.574560] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
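For reference, the FTL instance whose startup is traced here was assembled from the RPC calls captured earlier in this log. Condensed into a standalone sketch (the UUIDs are placeholders for the values printed above; each get_bdev_size step above is just block_size times num_blocks, e.g. 4096 B x 26476544 blocks = 103424 MiB):

  # carve a thin 103424 MiB volume out of an lvstore on the base namespace
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
  # attach the second controller and split off a 5171 MiB NV cache partition
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # bind base volume and cache into an FTL bdev, capping the L2P table at 60 MiB of DRAM
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60

The "line 52: [: -eq: unary operator expected" complaint above is a shell quirk rather than an FTL failure: whatever variable fio.sh tests on that line expanded to nothing, so the condition collapsed to '[ -eq 1 ]'. Guarding the expansion, for example [ "${var:-0}" -eq 1 ] with var standing in for the actual name, would sidestep it.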
00:19:29.569 [2024-12-09 14:52:07.574571] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:32.851 [2024-12-09 14:52:10.256906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.257139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:32.851 [2024-12-09 14:52:10.257326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2682.336 ms 00:19:32.851 [2024-12-09 14:52:10.257356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.285471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.285651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.851 [2024-12-09 14:52:10.285717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.825 ms 00:19:32.851 [2024-12-09 14:52:10.285744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.286044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.286065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.851 [2024-12-09 14:52:10.286075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:32.851 [2024-12-09 14:52:10.286088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.334290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.334440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.851 [2024-12-09 14:52:10.334557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.138 ms 00:19:32.851 [2024-12-09 14:52:10.334587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.334645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.334769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.851 [2024-12-09 14:52:10.334813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:32.851 [2024-12-09 14:52:10.334959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.335424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.335536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.851 [2024-12-09 14:52:10.335591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:19:32.851 [2024-12-09 14:52:10.335620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.335752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.335764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.851 [2024-12-09 14:52:10.335773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:32.851 [2024-12-09 14:52:10.335785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.351766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.351816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.851 [2024-12-09 
14:52:10.351827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.943 ms 00:19:32.851 [2024-12-09 14:52:10.351838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.364069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.851 [2024-12-09 14:52:10.381255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.381288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.851 [2024-12-09 14:52:10.381304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.307 ms 00:19:32.851 [2024-12-09 14:52:10.381312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.437299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.851 [2024-12-09 14:52:10.437430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:32.851 [2024-12-09 14:52:10.437453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.951 ms 00:19:32.851 [2024-12-09 14:52:10.437462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.851 [2024-12-09 14:52:10.437645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.437656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.852 [2024-12-09 14:52:10.437669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:32.852 [2024-12-09 14:52:10.437677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.460466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.460583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:32.852 [2024-12-09 14:52:10.460602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.735 ms 00:19:32.852 [2024-12-09 14:52:10.460611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.483761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.483885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:32.852 [2024-12-09 14:52:10.483905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.104 ms 00:19:32.852 [2024-12-09 14:52:10.483912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.484497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.484513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.852 [2024-12-09 14:52:10.484524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:19:32.852 [2024-12-09 14:52:10.484532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.551983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.552017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:32.852 [2024-12-09 14:52:10.552034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.410 ms 00:19:32.852 [2024-12-09 14:52:10.552045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 
14:52:10.576363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.576393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:32.852 [2024-12-09 14:52:10.576407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.232 ms 00:19:32.852 [2024-12-09 14:52:10.576415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.599513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.599628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:32.852 [2024-12-09 14:52:10.599647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.054 ms 00:19:32.852 [2024-12-09 14:52:10.599655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.623309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.623340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.852 [2024-12-09 14:52:10.623352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.617 ms 00:19:32.852 [2024-12-09 14:52:10.623360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.623411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.623420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.852 [2024-12-09 14:52:10.623435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.852 [2024-12-09 14:52:10.623442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.623529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.852 [2024-12-09 14:52:10.623540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.852 [2024-12-09 14:52:10.623551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:32.852 [2024-12-09 14:52:10.623558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.852 [2024-12-09 14:52:10.624555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3064.891 ms, result 0 00:19:32.852 { 00:19:32.852 "name": "ftl0", 00:19:32.852 "uuid": "ef982cf9-8f78-4859-9d6e-708e1b733c52" 00:19:32.852 } 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:32.852 14:52:10 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:33.110 [ 00:19:33.110 { 00:19:33.110 "name": "ftl0", 00:19:33.110 "aliases": [ 00:19:33.110 "ef982cf9-8f78-4859-9d6e-708e1b733c52" 00:19:33.110 ], 00:19:33.110 "product_name": "FTL 
disk", 00:19:33.110 "block_size": 4096, 00:19:33.110 "num_blocks": 20971520, 00:19:33.110 "uuid": "ef982cf9-8f78-4859-9d6e-708e1b733c52", 00:19:33.110 "assigned_rate_limits": { 00:19:33.110 "rw_ios_per_sec": 0, 00:19:33.110 "rw_mbytes_per_sec": 0, 00:19:33.110 "r_mbytes_per_sec": 0, 00:19:33.110 "w_mbytes_per_sec": 0 00:19:33.110 }, 00:19:33.110 "claimed": false, 00:19:33.110 "zoned": false, 00:19:33.110 "supported_io_types": { 00:19:33.110 "read": true, 00:19:33.110 "write": true, 00:19:33.110 "unmap": true, 00:19:33.110 "flush": true, 00:19:33.110 "reset": false, 00:19:33.110 "nvme_admin": false, 00:19:33.110 "nvme_io": false, 00:19:33.110 "nvme_io_md": false, 00:19:33.110 "write_zeroes": true, 00:19:33.110 "zcopy": false, 00:19:33.110 "get_zone_info": false, 00:19:33.110 "zone_management": false, 00:19:33.110 "zone_append": false, 00:19:33.110 "compare": false, 00:19:33.110 "compare_and_write": false, 00:19:33.110 "abort": false, 00:19:33.110 "seek_hole": false, 00:19:33.110 "seek_data": false, 00:19:33.110 "copy": false, 00:19:33.110 "nvme_iov_md": false 00:19:33.110 }, 00:19:33.110 "driver_specific": { 00:19:33.110 "ftl": { 00:19:33.110 "base_bdev": "1170fff3-8b25-4e7a-a2c2-d1c2ce6a0b9c", 00:19:33.110 "cache": "nvc0n1p0" 00:19:33.110 } 00:19:33.110 } 00:19:33.110 } 00:19:33.110 ] 00:19:33.110 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:33.110 14:52:11 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:33.111 14:52:11 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:33.111 14:52:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:33.111 14:52:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:33.369 [2024-12-09 14:52:11.401148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.401185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:33.369 [2024-12-09 14:52:11.401196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.369 [2024-12-09 14:52:11.401204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.401236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.369 [2024-12-09 14:52:11.403468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.403494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:33.369 [2024-12-09 14:52:11.403504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:19:33.369 [2024-12-09 14:52:11.403511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.403883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.403898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:33.369 [2024-12-09 14:52:11.403908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:19:33.369 [2024-12-09 14:52:11.403913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.406349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.406460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:33.369 
[2024-12-09 14:52:11.406475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:19:33.369 [2024-12-09 14:52:11.406481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.411207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.411231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:33.369 [2024-12-09 14:52:11.411242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:19:33.369 [2024-12-09 14:52:11.411248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.429779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.429896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:33.369 [2024-12-09 14:52:11.429922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.452 ms 00:19:33.369 [2024-12-09 14:52:11.429929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.442263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.442289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:33.369 [2024-12-09 14:52:11.442303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:19:33.369 [2024-12-09 14:52:11.442310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.442458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.442468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:33.369 [2024-12-09 14:52:11.442477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:33.369 [2024-12-09 14:52:11.442483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.459828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.459852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:33.369 [2024-12-09 14:52:11.459862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.323 ms 00:19:33.369 [2024-12-09 14:52:11.459868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.369 [2024-12-09 14:52:11.476846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.369 [2024-12-09 14:52:11.476947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:33.369 [2024-12-09 14:52:11.476963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.939 ms 00:19:33.369 [2024-12-09 14:52:11.476969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-12-09 14:52:11.494007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-12-09 14:52:11.494032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:33.629 [2024-12-09 14:52:11.494041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.003 ms 00:19:33.629 [2024-12-09 14:52:11.494047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-12-09 14:52:11.511103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-12-09 14:52:11.511127] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:33.629 [2024-12-09 14:52:11.511137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.961 ms 00:19:33.629 [2024-12-09 14:52:11.511143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-12-09 14:52:11.511177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:33.629 [2024-12-09 14:52:11.511188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 
[2024-12-09 14:52:11.511336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:33.629 [2024-12-09 14:52:11.511475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:33.630 [2024-12-09 14:52:11.511508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:33.630 [2024-12-09 14:52:11.511911] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:33.630 [2024-12-09 14:52:11.511919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef982cf9-8f78-4859-9d6e-708e1b733c52 00:19:33.630 [2024-12-09 14:52:11.511925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:33.630 [2024-12-09 14:52:11.511934] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:33.630 [2024-12-09 14:52:11.511940] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:33.630 [2024-12-09 14:52:11.511949] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:33.630 [2024-12-09 14:52:11.511955] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:33.630 [2024-12-09 14:52:11.511962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:33.630 [2024-12-09 14:52:11.511967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:33.630 [2024-12-09 14:52:11.511974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:33.630 [2024-12-09 14:52:11.511979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:33.630 [2024-12-09 14:52:11.511986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.630 [2024-12-09 14:52:11.511992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:33.630 [2024-12-09 14:52:11.512000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:19:33.630 [2024-12-09 14:52:11.512006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.630 [2024-12-09 14:52:11.522093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.630 [2024-12-09 14:52:11.522118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:33.630 [2024-12-09 14:52:11.522127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.055 ms 00:19:33.630 [2024-12-09 14:52:11.522133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.630 [2024-12-09 14:52:11.522424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.630 [2024-12-09 14:52:11.522431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:33.630 [2024-12-09 14:52:11.522440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:33.630 [2024-12-09 14:52:11.522446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.630 [2024-12-09 14:52:11.559280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.630 [2024-12-09 14:52:11.559388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.630 [2024-12-09 14:52:11.559404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.630 [2024-12-09 14:52:11.559410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
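The Rollback entries in this stretch mirror the startup Actions in reverse order, each logged with a 0.000 ms duration, as the shutdown path unwinds every initialization step. After the unload returns, the test reaps the SPDK app through the killprocess helper whose xtrace follows below; reconstructed from that trace (a sketch of the observed flow, not the exact autotest_common.sh source):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                            # fail fast if the process is already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # process_name is reactor_0 in this run, so the special sudo handling is skipped
      if [ "$process_name" != sudo ]; then
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"                               # block until it has actually exited
  }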
00:19:33.630 [2024-12-09 14:52:11.559469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.630 [2024-12-09 14:52:11.559475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:33.630 [2024-12-09 14:52:11.559484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.630 [2024-12-09 14:52:11.559490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.630 [2024-12-09 14:52:11.559560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.630 [2024-12-09 14:52:11.559571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:33.630 [2024-12-09 14:52:11.559580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.630 [2024-12-09 14:52:11.559586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.630 [2024-12-09 14:52:11.559612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.630 [2024-12-09 14:52:11.559619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:33.630 [2024-12-09 14:52:11.559626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.630 [2024-12-09 14:52:11.559632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.630 [2024-12-09 14:52:11.625044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.630 [2024-12-09 14:52:11.625080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:33.630 [2024-12-09 14:52:11.625091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.630 [2024-12-09 14:52:11.625098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:33.631 [2024-12-09 14:52:11.676395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:33.631 [2024-12-09 14:52:11.676517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:33.631 [2024-12-09 14:52:11.676598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:33.631 [2024-12-09 14:52:11.676717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:33.631 [2024-12-09 14:52:11.676781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:33.631 [2024-12-09 14:52:11.676863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.676920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.631 [2024-12-09 14:52:11.676928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:33.631 [2024-12-09 14:52:11.676935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.631 [2024-12-09 14:52:11.676941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.631 [2024-12-09 14:52:11.677099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.923 ms, result 0
00:19:33.631 true
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76579
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76579 ']'
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76579
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76579
00:19:33.631 killing process with pid 76579
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76579'
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76579
00:19:33.631 14:52:11 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76579
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:40.195 14:52:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:19:40.195 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:19:40.195 fio-3.35
00:19:40.195 Starting 1 thread
00:19:45.517
00:19:45.517 test: (groupid=0, jobs=1): err= 0: pid=76769: Mon Dec 9 14:52:23 2024
00:19:45.517 read: IOPS=789, BW=52.4MiB/s (55.0MB/s)(255MiB/4853msec)
00:19:45.517 slat (nsec): min=3937, max=43553, avg=7576.40, stdev=3634.65
00:19:45.517 clat (usec): min=285, max=3667, avg=567.07, stdev=205.29
00:19:45.517 lat (usec): min=290, max=3682, avg=574.65, stdev=206.56
00:19:45.517 clat percentiles (usec):
00:19:45.517 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 351],
00:19:45.517 | 30.00th=[ 453], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 562],
00:19:45.517 | 70.00th=[ 603], 80.00th=[ 807], 90.00th=[ 889], 95.00th=[ 938],
00:19:45.517 | 99.00th=[ 1020], 99.50th=[ 1090], 99.90th=[ 1352], 99.95th=[ 1483],
00:19:45.517 | 99.99th=[ 3654]
00:19:45.517 write: IOPS=795, BW=52.8MiB/s (55.4MB/s)(256MiB/4850msec); 0 zone resets
00:19:45.517 slat (nsec): min=14972, max=78268, avg=26190.85, stdev=6486.27
00:19:45.517 clat (usec): min=316, max=6713, avg=646.37, stdev=238.81
00:19:45.517 lat (usec): min=334, max=6738, avg=672.56, stdev=239.74
00:19:45.517 clat percentiles (usec):
00:19:45.517 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 429],
00:19:45.517 | 30.00th=[ 529], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 644],
00:19:45.517 | 70.00th=[ 693], 80.00th=[ 857], 90.00th=[ 979], 95.00th=[ 1020],
00:19:45.517 | 99.00th=[ 1254], 99.50th=[ 1336], 99.90th=[ 1647], 99.95th=[ 1860],
00:19:45.517 | 99.99th=[ 6718]
00:19:45.517 bw ( KiB/s): min=36992, max=65008, per=94.86%, avg=51287.11, stdev=9224.08, samples=9
00:19:45.517 iops : min= 544, max= 956, avg=754.22, stdev=135.65, samples=9
00:19:45.517 lat (usec) : 500=32.87%, 750=44.61%, 1000=18.68%
00:19:45.517 lat (msec) : 2=3.82%, 4=0.01%, 10=0.01%
00:19:45.517 cpu : usr=98.99%, sys=0.04%, ctx=9, majf=0, minf=1169
00:19:45.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:45.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:45.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:45.517 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:45.517 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:45.517
00:19:45.517 Run status group 0 (all jobs):
00:19:45.517 READ: bw=52.4MiB/s (55.0MB/s), 52.4MiB/s-52.4MiB/s (55.0MB/s-55.0MB/s), io=255MiB (267MB), run=4853-4853msec
00:19:45.517 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=256MiB (269MB), run=4850-4850msec
00:19:46.458 -----------------------------------------------------
00:19:46.458 Suppressions used:
00:19:46.458 count bytes template
00:19:46.458 1 5 /usr/src/fio/parse.c
00:19:46.458 1 8 libtcmalloc_minimal.so
00:19:46.458 1 904 libcrypto.so
00:19:46.458 -----------------------------------------------------
00:19:46.458
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:46.719 14:52:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:19:46.719 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:46.719 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:46.719 fio-3.35
00:19:46.719 Starting 2 threads
00:20:13.275
00:20:13.275 first_half: (groupid=0, jobs=1): err= 0: pid=76878: Mon Dec 9 14:52:49 2024
00:20:13.275 read: IOPS=2738, BW=10.7MiB/s (11.2MB/s)(256MiB/23909msec)
00:20:13.275 slat (nsec): min=3182, max=68275, avg=5449.92, stdev=1498.68
00:20:13.275 clat (usec): min=530, max=546635, avg=38407.82, stdev=29798.92
00:20:13.275 lat (usec): min=534, max=546644, avg=38413.27, stdev=29799.03
00:20:13.275 clat percentiles (msec):
00:20:13.275 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 31],
00:20:13.275 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32],
00:20:13.275 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 79],
00:20:13.275 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 418], 99.95th=[ 489],
00:20:13.275 | 99.99th=[ 535]
00:20:13.275 write: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(256MiB/23880msec); 0 zone resets
00:20:13.275 slat (usec): min=3, max=1264, avg= 6.73, stdev= 9.29
00:20:13.275 clat (usec): min=371, max=47902, avg=8304.31, stdev=7717.29
00:20:13.275 lat (usec): min=377, max=47907, avg=8311.04, stdev=7717.84
00:20:13.275 clat percentiles (usec):
00:20:13.275 | 1.00th=[ 881], 5.00th=[ 1270], 10.00th=[ 1663], 20.00th=[ 3326],
00:20:13.275 | 30.00th=[ 4228], 40.00th=[ 5080], 50.00th=[ 5735], 60.00th=[ 6587],
00:20:13.275 | 70.00th=[ 8029], 80.00th=[12125], 90.00th=[18482], 95.00th=[22414],
00:20:13.275 | 99.00th=[39060], 99.50th=[41681], 99.90th=[44827], 99.95th=[45876],
00:20:13.275 | 99.99th=[47449]
00:20:13.275 bw ( KiB/s): min= 1760, max=40016, per=98.87%, avg=21706.33, stdev=13081.52, samples=24
00:20:13.275 iops : min= 440, max=10004, avg=5426.58, stdev=3270.38, samples=24
00:20:13.275 lat (usec) : 500=0.02%, 750=0.12%, 1000=0.82%
00:20:13.275 lat (msec) : 2=5.08%, 4=7.82%, 10=25.08%, 20=9.11%, 50=48.29%
00:20:13.275 lat (msec) : 100=1.65%, 250=1.93%, 500=0.05%, 750=0.02%
00:20:13.275 cpu : usr=99.24%, sys=0.15%, ctx=37, majf=0, minf=5532
00:20:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:20:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.275 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:13.275 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.275 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:13.275 second_half: (groupid=0, jobs=1): err= 0: pid=76879: Mon Dec 9 14:52:49 2024
00:20:13.275 read: IOPS=2760, BW=10.8MiB/s (11.3MB/s)(256MiB/23725msec)
00:20:13.275 slat (nsec): min=3172, max=47228, avg=5476.40, stdev=1677.25
00:20:13.275 clat (msec): min=13, max=337, avg=39.28, stdev=27.42
00:20:13.275 lat (msec): min=13, max=337, avg=39.29, stdev=27.42
00:20:13.275 clat percentiles (msec):
00:20:13.275 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31],
00:20:13.275 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33],
00:20:13.275 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 45], 95.00th=[ 83],
00:20:13.275 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 271], 99.95th=[ 279],
00:20:13.275 | 99.99th=[ 326]
00:20:13.275 write: IOPS=2775, BW=10.8MiB/s (11.4MB/s)(256MiB/23613msec); 0 zone resets
00:20:13.275 slat (usec): min=3, max=3518, avg= 7.22, stdev=28.33
00:20:13.275 clat (usec): min=338, max=42019, avg=7065.09, stdev=5445.80
00:20:13.275 lat (usec): min=343, max=42024, avg=7072.30, stdev=5447.78
00:20:13.275 clat percentiles (usec):
00:20:13.275 | 1.00th=[ 938], 5.00th=[ 1991], 10.00th=[ 2704], 20.00th=[ 3654],
00:20:13.275 | 30.00th=[ 4293], 40.00th=[ 4948], 50.00th=[ 5407], 60.00th=[ 5866],
00:20:13.275 | 70.00th=[ 6456], 80.00th=[ 7767], 90.00th=[17171], 95.00th=[19530],
00:20:13.275 | 99.00th=[24249], 99.50th=[26608], 99.90th=[39060], 99.95th=[40633],
00:20:13.275 | 99.99th=[41681]
00:20:13.275 bw ( KiB/s): min= 824, max=47704, per=100.00%, avg=22625.04, stdev=14140.13, samples=23
00:20:13.275 iops : min= 206, max=11926, avg=5656.26, stdev=3535.03, samples=23
00:20:13.275 lat (usec) : 500=0.03%, 750=0.18%, 1000=0.38%
00:20:13.275 lat (msec) : 2=1.92%, 4=10.35%, 10=28.69%, 20=6.54%, 50=48.08%
00:20:13.275 lat (msec) : 100=1.75%, 250=1.93%, 500=0.14%
00:20:13.275 cpu : usr=98.94%, sys=0.27%, ctx=30, majf=0, minf=5573
00:20:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:20:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:13.275 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:13.275 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:13.275 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:13.275
00:20:13.275 Run status group 0 (all jobs):
00:20:13.275 READ: bw=21.4MiB/s (22.4MB/s), 10.7MiB/s-10.8MiB/s (11.2MB/s-11.3MB/s), io=512MiB (536MB), run=23725-23909msec
00:20:13.275 WRITE: bw=21.4MiB/s (22.5MB/s), 10.7MiB/s-10.8MiB/s (11.2MB/s-11.4MB/s), io=512MiB (537MB), run=23613-23880msec
00:20:13.536 -----------------------------------------------------
00:20:13.536 Suppressions used:
00:20:13.536 count bytes template
00:20:13.536 2 10 /usr/src/fio/parse.c
00:20:13.536 2 192 /usr/src/fio/iolog.c
00:20:13.536 1 8 libtcmalloc_minimal.so
00:20:13.536 1 904 libcrypto.so
00:20:13.536 -----------------------------------------------------
00:20:13.536
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:13.536 14:52:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:20:13.797 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:20:13.797 fio-3.35
00:20:13.797 Starting 1 thread
00:20:31.900
00:20:31.900 test: (groupid=0, jobs=1): err= 0: pid=77192: Mon Dec 9 14:53:07 2024
00:20:31.900 read: IOPS=7752, BW=30.3MiB/s (31.8MB/s)(255MiB/8410msec)
00:20:31.900 slat (nsec): min=3125, max=30934, avg=5013.47, stdev=1199.16
00:20:31.900 clat (usec): min=540, max=32102, avg=16501.38, stdev=2128.47
00:20:31.900 lat (usec): min=544, max=32108, avg=16506.39, stdev=2128.49
00:20:31.900 clat percentiles (usec):
00:20:31.900 | 1.00th=[13829], 5.00th=[14091], 10.00th=[15139], 20.00th=[15533],
00:20:31.900 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16057], 60.00th=[16188],
00:20:31.900 | 70.00th=[16450], 80.00th=[16712], 90.00th=[18220], 95.00th=[22414],
00:20:31.900 | 99.00th=[24773], 99.50th=[25297], 99.90th=[25822], 99.95th=[28181],
00:20:31.900 | 99.99th=[31327]
00:20:31.900 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(256MiB/5590msec); 0 zone resets
00:20:31.900 slat (usec): min=4, max=152, avg= 7.90, stdev= 3.63
00:20:31.900 clat (usec): min=472, max=47465, avg=10873.44, stdev=10913.41
00:20:31.900 lat (usec): min=477, max=47471, avg=10881.33, stdev=10913.74
00:20:31.900 clat percentiles (usec):
00:20:31.900 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 857], 20.00th=[ 1020],
00:20:31.900 | 30.00th=[ 1221], 40.00th=[ 1614], 50.00th=[ 8586], 60.00th=[11994],
00:20:31.900 | 70.00th=[15008], 80.00th=[18482], 90.00th=[28443], 95.00th=[34341],
00:20:31.900 | 99.00th=[38011], 99.50th=[39584], 99.90th=[44303], 99.95th=[44827],
00:20:31.900 | 99.99th=[46400]
00:20:31.900 bw ( KiB/s): min=10096, max=64936, per=93.17%, avg=43690.67, stdev=15412.29, samples=12
00:20:31.900 iops : min= 2524, max=16234, avg=10922.67, stdev=3853.07, samples=12
00:20:31.900 lat (usec) : 500=0.01%, 750=2.11%, 1000=7.40%
00:20:31.900 lat (msec) : 2=11.05%, 4=0.56%, 10=5.75%, 20=60.91%, 50=12.22%
00:20:31.900 cpu : usr=99.03%, sys=0.20%, ctx=25, majf=0, minf=5565
00:20:31.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:20:31.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:31.900 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:31.900 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:31.900 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:31.900
00:20:31.900 Run status group 0 (all jobs):
00:20:31.900 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=255MiB (267MB), run=8410-8410msec
00:20:31.900 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=256MiB (268MB), run=5590-5590msec
00:20:31.900 -----------------------------------------------------
00:20:31.900 Suppressions used:
00:20:31.900 count bytes template
00:20:31.900 1 5 /usr/src/fio/parse.c
00:20:31.900 2 192 /usr/src/fio/iolog.c
00:20:31.900 1 8 libtcmalloc_minimal.so
00:20:31.900 1 904 libcrypto.so
00:20:31.900 -----------------------------------------------------
00:20:31.900
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:31.900 Remove shared memory files
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:20:31.900 14:53:08 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:20:31.900 14:53:09 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58402 /dev/shm/spdk_tgt_trace.pid75494
00:20:31.900 14:53:09 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:31.900 14:53:09 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:20:31.900 ************************************
00:20:31.900 END TEST ftl_fio_basic
00:20:31.900 ************************************
00:20:31.900
00:20:31.900 real 1m5.492s
00:20:31.900 user 2m22.843s
00:20:31.900 sys 0m3.233s
00:20:31.900 14:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:31.900 14:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:20:31.900 14:53:09 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:20:31.900 14:53:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:20:31.900 14:53:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:31.900 14:53:09 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:31.900 ************************************
00:20:31.900 START TEST ftl_bdevperf
00:20:31.900 ************************************
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
* Looking for test storage...
00:20:31.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:20:31.900 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:31.901 --rc genhtml_branch_coverage=1
00:20:31.901 --rc genhtml_function_coverage=1
00:20:31.901 --rc genhtml_legend=1
00:20:31.901 --rc geninfo_all_blocks=1
00:20:31.901 --rc geninfo_unexecuted_blocks=1
00:20:31.901
00:20:31.901 '
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:31.901 --rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
00:20:31.901 --rc genhtml_legend=1
00:20:31.901 --rc geninfo_all_blocks=1
00:20:31.901 --rc geninfo_unexecuted_blocks=1
00:20:31.901
00:20:31.901 '
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:20:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:31.901 --rc genhtml_branch_coverage=1
00:20:31.901 --rc genhtml_function_coverage=1
00:20:31.901 --rc genhtml_legend=1
00:20:31.901 --rc geninfo_all_blocks=1
00:20:31.901 --rc geninfo_unexecuted_blocks=1
00:20:31.901
00:20:31.901 '
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:20:31.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:31.901 --rc genhtml_branch_coverage=1
00:20:31.901 --rc genhtml_function_coverage=1
00:20:31.901 --rc genhtml_legend=1
00:20:31.901 --rc geninfo_all_blocks=1
00:20:31.901 --rc geninfo_unexecuted_blocks=1
00:20:31.901
00:20:31.901 '
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77437
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77437
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77437 ']'
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:31.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:31.901 14:53:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:31.901 [2024-12-09 14:53:09.333707] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
00:20:31.901 [2024-12-09 14:53:09.334037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77437 ]
00:20:31.901 [2024-12-09 14:53:09.494880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.901 [2024-12-09 14:53:09.582639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:20:32.159 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:20:32.420 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:20:32.681 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:32.681 {
00:20:32.681 "name": "nvme0n1",
00:20:32.681 "aliases": [
00:20:32.681 "c817ae2b-b8e4-4ed0-b1fc-620a896b216d"
00:20:32.681 ],
00:20:32.681 "product_name": "NVMe disk",
00:20:32.681 "block_size": 4096,
00:20:32.681 "num_blocks": 1310720,
00:20:32.681 "uuid": "c817ae2b-b8e4-4ed0-b1fc-620a896b216d",
00:20:32.681 "numa_id": -1,
00:20:32.681 "assigned_rate_limits": {
00:20:32.681 "rw_ios_per_sec": 0,
00:20:32.681 "rw_mbytes_per_sec": 0,
00:20:32.681 "r_mbytes_per_sec": 0,
00:20:32.681 "w_mbytes_per_sec": 0
00:20:32.681 },
00:20:32.681 "claimed": true,
00:20:32.681 "claim_type": "read_many_write_one",
00:20:32.681 "zoned": false,
00:20:32.681 "supported_io_types": {
00:20:32.681 "read": true,
00:20:32.681 "write": true,
00:20:32.681 "unmap": true,
00:20:32.681 "flush": true,
00:20:32.681 "reset": true,
00:20:32.681 "nvme_admin": true,
00:20:32.681 "nvme_io": true,
00:20:32.681 "nvme_io_md": false,
00:20:32.681 "write_zeroes": true,
00:20:32.681 "zcopy": false,
00:20:32.681 "get_zone_info": false,
00:20:32.681 "zone_management": false,
00:20:32.681 "zone_append": false,
00:20:32.681 "compare": true,
00:20:32.681 "compare_and_write": false,
00:20:32.681 "abort": true,
00:20:32.681 "seek_hole": false,
00:20:32.681 "seek_data": false,
00:20:32.681 "copy": true,
00:20:32.681 "nvme_iov_md": false
00:20:32.681 },
00:20:32.681 "driver_specific": {
"nvme": [
00:20:32.682 {
00:20:32.682 "pci_address": "0000:00:11.0",
00:20:32.682 "trid": {
00:20:32.682 "trtype": "PCIe",
00:20:32.682 "traddr": "0000:00:11.0"
00:20:32.682 },
00:20:32.682 "ctrlr_data": {
00:20:32.682 "cntlid": 0,
00:20:32.682 "vendor_id": "0x1b36",
00:20:32.682 "model_number": "QEMU NVMe Ctrl",
00:20:32.682 "serial_number": "12341",
00:20:32.682 "firmware_revision": "8.0.0",
00:20:32.682 "subnqn": "nqn.2019-08.org.qemu:12341",
00:20:32.682 "oacs": {
00:20:32.682 "security": 0,
00:20:32.682 "format": 1,
00:20:32.682 "firmware": 0,
00:20:32.682 "ns_manage": 1
00:20:32.682 },
00:20:32.682 "multi_ctrlr": false,
00:20:32.682 "ana_reporting": false
00:20:32.682 },
00:20:32.682 "vs": {
00:20:32.682 "nvme_version": "1.4"
00:20:32.682 },
00:20:32.682 "ns_data": {
00:20:32.682 "id": 1,
00:20:32.682 "can_share": false
00:20:32.682 }
00:20:32.682 }
00:20:32.682 ],
00:20:32.682 "mp_policy": "active_passive"
00:20:32.682 }
00:20:32.682 }
00:20:32.682 ]'
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:32.682 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:20:32.944 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=9770be25-1d21-48e5-a634-4c01a7baefbb
00:20:32.944 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:20:32.944 14:53:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9770be25-1d21-48e5-a634-4c01a7baefbb
00:20:33.205 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:20:33.466 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d
00:20:33.466 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:20:33.728 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:33.988 {
00:20:33.988 "name": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:33.988 "aliases": [
00:20:33.988 "lvs/nvme0n1p0"
00:20:33.988 ],
00:20:33.988 "product_name": "Logical Volume",
00:20:33.988 "block_size": 4096,
00:20:33.988 "num_blocks": 26476544,
00:20:33.988 "uuid": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:33.988 "assigned_rate_limits": {
00:20:33.988 "rw_ios_per_sec": 0,
00:20:33.988 "rw_mbytes_per_sec": 0,
00:20:33.988 "r_mbytes_per_sec": 0,
00:20:33.988 "w_mbytes_per_sec": 0
00:20:33.988 },
00:20:33.988 "claimed": false,
00:20:33.988 "zoned": false,
00:20:33.988 "supported_io_types": {
00:20:33.988 "read": true,
00:20:33.988 "write": true,
00:20:33.988 "unmap": true,
00:20:33.988 "flush": false,
00:20:33.988 "reset": true,
00:20:33.988 "nvme_admin": false,
00:20:33.988 "nvme_io": false,
00:20:33.988 "nvme_io_md": false,
00:20:33.988 "write_zeroes": true,
00:20:33.988 "zcopy": false,
00:20:33.988 "get_zone_info": false,
00:20:33.988 "zone_management": false,
00:20:33.988 "zone_append": false,
00:20:33.988 "compare": false,
00:20:33.988 "compare_and_write": false,
00:20:33.988 "abort": false,
00:20:33.988 "seek_hole": true,
00:20:33.988 "seek_data": true,
00:20:33.988 "copy": false,
00:20:33.988 "nvme_iov_md": false
00:20:33.988 },
00:20:33.988 "driver_specific": {
00:20:33.988 "lvol": {
00:20:33.988 "lvol_store_uuid": "6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d",
00:20:33.988 "base_bdev": "nvme0n1",
00:20:33.988 "thin_provision": true,
00:20:33.988 "num_allocated_clusters": 0,
00:20:33.988 "snapshot": false,
00:20:33.988 "clone": false,
00:20:33.988 "esnap_clone": false
00:20:33.988 }
00:20:33.988 }
00:20:33.988 }
00:20:33.988 ]'
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:20:33.988 14:53:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:20:34.248 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.509 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:34.509 {
00:20:34.509 "name": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:34.509 "aliases": [
00:20:34.509 "lvs/nvme0n1p0"
00:20:34.509 ],
00:20:34.509 "product_name": "Logical Volume",
00:20:34.509 "block_size": 4096,
00:20:34.509 "num_blocks": 26476544,
00:20:34.509 "uuid": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:34.509 "assigned_rate_limits": {
00:20:34.509 "rw_ios_per_sec": 0,
00:20:34.509 "rw_mbytes_per_sec": 0,
00:20:34.509 "r_mbytes_per_sec": 0,
00:20:34.509 "w_mbytes_per_sec": 0
00:20:34.509 },
00:20:34.509 "claimed": false,
00:20:34.509 "zoned": false,
00:20:34.509 "supported_io_types": {
00:20:34.509 "read": true,
00:20:34.509 "write": true,
00:20:34.509 "unmap": true,
00:20:34.509 "flush": false,
00:20:34.509 "reset": true,
00:20:34.509 "nvme_admin": false,
00:20:34.509 "nvme_io": false,
00:20:34.509 "nvme_io_md": false,
00:20:34.509 "write_zeroes": true,
00:20:34.509 "zcopy": false,
00:20:34.509 "get_zone_info": false,
00:20:34.509 "zone_management": false,
00:20:34.509 "zone_append": false,
00:20:34.509 "compare": false,
00:20:34.509 "compare_and_write": false,
00:20:34.509 "abort": false,
00:20:34.509 "seek_hole": true,
00:20:34.509 "seek_data": true,
00:20:34.509 "copy": false,
00:20:34.509 "nvme_iov_md": false
00:20:34.509 },
00:20:34.509 "driver_specific": {
00:20:34.509 "lvol": {
00:20:34.509 "lvol_store_uuid": "6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d",
00:20:34.509 "base_bdev": "nvme0n1",
00:20:34.509 "thin_provision": true,
00:20:34.509 "num_allocated_clusters": 0,
00:20:34.509 "snapshot": false,
00:20:34.509 "clone": false,
00:20:34.509 "esnap_clone": false
00:20:34.509 }
00:20:34.509 }
00:20:34.509 }
00:20:34.509 ]'
00:20:34.509 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:34.509 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:20:34.510 14:53:12 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2314139-d95a-4a5b-bc1f-adae1049abd6
00:20:34.772 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:20:34.772 {
00:20:34.772 "name": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:34.772 "aliases": [
00:20:34.772 "lvs/nvme0n1p0"
00:20:34.772 ],
00:20:34.772 "product_name": "Logical Volume",
00:20:34.772 "block_size": 4096,
00:20:34.772 "num_blocks": 26476544,
00:20:34.772 "uuid": "c2314139-d95a-4a5b-bc1f-adae1049abd6",
00:20:34.772 "assigned_rate_limits": {
00:20:34.772 "rw_ios_per_sec": 0,
00:20:34.772 "rw_mbytes_per_sec": 0,
00:20:34.772 "r_mbytes_per_sec": 0,
00:20:34.772 "w_mbytes_per_sec": 0
00:20:34.772 },
00:20:34.772 "claimed": false,
00:20:34.772 "zoned": false,
00:20:34.772 "supported_io_types": {
00:20:34.772 "read": true,
00:20:34.772 "write": true,
00:20:34.772 "unmap": true,
00:20:34.772 "flush": false,
00:20:34.772 "reset": true,
00:20:34.772 "nvme_admin": false,
00:20:34.772 "nvme_io": false,
00:20:34.772 "nvme_io_md": false,
00:20:34.772 "write_zeroes": true,
00:20:34.772 "zcopy": false,
00:20:34.772 "get_zone_info": false,
00:20:34.772 "zone_management": false,
00:20:34.772 "zone_append": false,
00:20:34.772 "compare": false,
00:20:34.772 "compare_and_write": false,
00:20:34.772 "abort": false,
00:20:34.772 "seek_hole": true,
00:20:34.772 "seek_data": true,
00:20:34.772 "copy": false,
00:20:34.772 "nvme_iov_md": false
00:20:34.772 },
00:20:34.772 "driver_specific": {
00:20:34.772 "lvol": {
00:20:34.772 "lvol_store_uuid": "6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d",
00:20:34.772 "base_bdev": "nvme0n1",
00:20:34.772 "thin_provision": true,
00:20:34.772 "num_allocated_clusters": 0,
00:20:34.772 "snapshot": false,
00:20:34.772 "clone": false,
00:20:34.772 "esnap_clone": false
00:20:34.772 }
00:20:34.772 }
00:20:34.772 }
00:20:34.772 ]'
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:20:35.033 14:53:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c2314139-d95a-4a5b-bc1f-adae1049abd6 -c nvc0n1p0 --l2p_dram_limit 20
00:20:35.294 [2024-12-09 14:53:13.155544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.155624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:35.294 [2024-12-09 14:53:13.155643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:35.294 [2024-12-09 14:53:13.155655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.155746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.155760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:35.294 [2024-12-09 14:53:13.155771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:20:35.294 [2024-12-09 14:53:13.155782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.155823] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:35.294 [2024-12-09 14:53:13.156756] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:35.294 [2024-12-09 14:53:13.156790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.156824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:35.294 [2024-12-09 14:53:13.156841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms
00:20:35.294 [2024-12-09 14:53:13.156853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.156893] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 65180ac7-6499-4ec6-9b67-63923e4bcfb5
00:20:35.294 [2024-12-09 14:53:13.159282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.159335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:20:35.294 [2024-12-09 14:53:13.159355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:20:35.294 [2024-12-09 14:53:13.159365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.172179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.172435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:35.294 [2024-12-09 14:53:13.172462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.735 ms
00:20:35.294 [2024-12-09 14:53:13.172475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.172657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.172671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:35.294 [2024-12-09 14:53:13.172689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms
00:20:35.294 [2024-12-09 14:53:13.172698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.172763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.172776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:35.294 [2024-12-09 14:53:13.172787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:20:35.294 [2024-12-09 14:53:13.172795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.172864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:35.294 [2024-12-09 14:53:13.177956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.178008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:35.294 [2024-12-09 14:53:13.178021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms
00:20:35.294 [2024-12-09 14:53:13.178037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.178085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.178097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:35.294 [2024-12-09 14:53:13.178107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:20:35.294 [2024-12-09 14:53:13.178117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.178154] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:20:35.294 [2024-12-09 14:53:13.178331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:35.294 [2024-12-09 14:53:13.178347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:35.294 [2024-12-09 14:53:13.178364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:35.294 [2024-12-09 14:53:13.178376] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:35.294 [2024-12-09 14:53:13.178389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:35.294 [2024-12-09 14:53:13.178398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:20:35.294 [2024-12-09 14:53:13.178409] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:35.294 [2024-12-09 14:53:13.178418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:35.294 [2024-12-09 14:53:13.178430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:35.294 [2024-12-09 14:53:13.178441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.294 [2024-12-09 14:53:13.178452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:35.294 [2024-12-09 14:53:13.178463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms
00:20:35.294 [2024-12-09 14:53:13.178473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.294 [2024-12-09 14:53:13.178560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.295 [2024-12-09 14:53:13.178573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:35.295 [2024-12-09 14:53:13.178582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:20:35.295 [2024-12-09 14:53:13.178595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.295 [2024-12-09 14:53:13.178688] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:35.295 [2024-12-09 14:53:13.178704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:35.295 [2024-12-09 14:53:13.178713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:35.295 [2024-12-09 14:53:13.178725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:35.295 [2024-12-09 14:53:13.178734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:35.295 [2024-12-09 14:53:13.178743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:35.295 [2024-12-09 14:53:13.178749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
[2024-12-09 14:53:13.178761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.295 [2024-12-09 14:53:13.178770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:35.295 [2024-12-09 14:53:13.178780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.295 [2024-12-09 14:53:13.178786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.295 [2024-12-09 14:53:13.178845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:35.295 [2024-12-09 14:53:13.178853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.295 [2024-12-09 14:53:13.178865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.295 [2024-12-09 14:53:13.178873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:35.295 [2024-12-09 14:53:13.178886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.295 [2024-12-09 14:53:13.178893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.295 [2024-12-09 14:53:13.178903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:35.295 [2024-12-09 14:53:13.178911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.295 [2024-12-09 14:53:13.178923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.295 [2024-12-09 14:53:13.178938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:35.295 [2024-12-09 14:53:13.178948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.295 [2024-12-09 14:53:13.178955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.295 [2024-12-09 14:53:13.178990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:35.295 [2024-12-09 14:53:13.178997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.295 [2024-12-09 14:53:13.179007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.295 [2024-12-09 14:53:13.179014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.295 [2024-12-09 14:53:13.179030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.295 [2024-12-09 14:53:13.179040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.295 [2024-12-09 14:53:13.179060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.295 [2024-12-09 14:53:13.179068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.295 [2024-12-09 14:53:13.179083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.295 [2024-12-09 14:53:13.179092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:35.295 [2024-12-09 14:53:13.179100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.295 [2024-12-09 14:53:13.179112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.295 [2024-12-09 14:53:13.179120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:35.295 [2024-12-09 14:53:13.179130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.295 [2024-12-09 14:53:13.179153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:35.295 [2024-12-09 14:53:13.179160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179169] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.295 [2024-12-09 14:53:13.179177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.295 [2024-12-09 14:53:13.179188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.295 [2024-12-09 14:53:13.179196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.295 [2024-12-09 14:53:13.179209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:35.295 [2024-12-09 14:53:13.179216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.295 [2024-12-09 14:53:13.179225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.295 [2024-12-09 14:53:13.179233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.295 [2024-12-09 14:53:13.179243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.295 [2024-12-09 14:53:13.179255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.295 [2024-12-09 14:53:13.179267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.295 [2024-12-09 14:53:13.179280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:35.295 [2024-12-09 14:53:13.179300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:35.295 [2024-12-09 14:53:13.179309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:35.295 [2024-12-09 14:53:13.179316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:35.295 [2024-12-09 14:53:13.179326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:35.295 [2024-12-09 14:53:13.179335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:35.295 [2024-12-09 14:53:13.179344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:35.295 [2024-12-09 14:53:13.179351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:35.295 [2024-12-09 14:53:13.179365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:35.295 [2024-12-09 14:53:13.179372] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:35.295 [2024-12-09 14:53:13.179417] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.295 [2024-12-09 14:53:13.179425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.295 [2024-12-09 14:53:13.179446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.295 [2024-12-09 14:53:13.179456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.295 [2024-12-09 14:53:13.179464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.295 [2024-12-09 14:53:13.179476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.295 [2024-12-09 14:53:13.179483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.295 [2024-12-09 14:53:13.179494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:20:35.295 [2024-12-09 14:53:13.179501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.295 [2024-12-09 14:53:13.179542] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
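The startup trace above is emitted by the bdev_ftl_create RPC issued at ftl/bdevperf.sh@26, and the layout figures it reports hang together arithmetically. A minimal sketch of the cross-checks, using plain shell arithmetic on numbers copied from this run's log (nothing below talks to SPDK):

    # Base device capacity: 26476544 blocks x 4096 B, reported as "103424.00 MiB"
    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424
    # Full L2P table: 20971520 entries x 4 B per address -> the 80.00 MiB l2p region
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80
    # --l2p_dram_limit 20 caps how much of that table stays resident in DRAM,
    # which is why the log later prints "l2p maximum resident size is: 19 (of 20) MiB"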
00:20:35.295 [2024-12-09 14:53:13.179553] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:39.505 [2024-12-09 14:53:16.891272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.891332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:39.505 [2024-12-09 14:53:16.891346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3711.719 ms 00:20:39.505 [2024-12-09 14:53:16.891354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.915066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.915230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.505 [2024-12-09 14:53:16.915250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.536 ms 00:20:39.505 [2024-12-09 14:53:16.915257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.915365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.915374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.505 [2024-12-09 14:53:16.915386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:39.505 [2024-12-09 14:53:16.915392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.961180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.961313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.505 [2024-12-09 14:53:16.961333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.761 ms 00:20:39.505 [2024-12-09 14:53:16.961341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.961371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.961379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.505 [2024-12-09 14:53:16.961387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:39.505 [2024-12-09 14:53:16.961395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.961836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.961854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.505 [2024-12-09 14:53:16.961864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:20:39.505 [2024-12-09 14:53:16.961872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.961961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.961969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.505 [2024-12-09 14:53:16.961980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:39.505 [2024-12-09 14:53:16.961987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.973929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.973955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.505 [2024-12-09 
14:53:16.973966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.926 ms 00:20:39.505 [2024-12-09 14:53:16.973979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:16.983854] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:39.505 [2024-12-09 14:53:16.989585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:16.989615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.505 [2024-12-09 14:53:16.989624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.552 ms 00:20:39.505 [2024-12-09 14:53:16.989632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.069368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.069404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:39.505 [2024-12-09 14:53:17.069414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.717 ms 00:20:39.505 [2024-12-09 14:53:17.069421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.069570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.069584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.505 [2024-12-09 14:53:17.069591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:20:39.505 [2024-12-09 14:53:17.069600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.088525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.088556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:39.505 [2024-12-09 14:53:17.088565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.902 ms 00:20:39.505 [2024-12-09 14:53:17.088573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.106201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.106230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:39.505 [2024-12-09 14:53:17.106239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.600 ms 00:20:39.505 [2024-12-09 14:53:17.106247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.106685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.106696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.505 [2024-12-09 14:53:17.106703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:20:39.505 [2024-12-09 14:53:17.106711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.171484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.171518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:39.505 [2024-12-09 14:53:17.171527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.750 ms 00:20:39.505 [2024-12-09 14:53:17.171536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 
14:53:17.191758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.191792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:39.505 [2024-12-09 14:53:17.191816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.171 ms 00:20:39.505 [2024-12-09 14:53:17.191825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.210437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.210564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:39.505 [2024-12-09 14:53:17.210578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.584 ms 00:20:39.505 [2024-12-09 14:53:17.210585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.229743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.229876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.505 [2024-12-09 14:53:17.229890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.133 ms 00:20:39.505 [2024-12-09 14:53:17.229898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.229926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.229938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.505 [2024-12-09 14:53:17.229945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.505 [2024-12-09 14:53:17.229953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.230017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.505 [2024-12-09 14:53:17.230027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.505 [2024-12-09 14:53:17.230034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:39.505 [2024-12-09 14:53:17.230041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.505 [2024-12-09 14:53:17.230889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4074.970 ms, result 0 00:20:39.505 { 00:20:39.505 "name": "ftl0", 00:20:39.505 "uuid": "65180ac7-6499-4ec6-9b67-63923e4bcfb5" 00:20:39.505 } 00:20:39.505 14:53:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:39.505 14:53:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:39.505 14:53:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:39.505 14:53:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:39.505 [2024-12-09 14:53:17.539030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:39.505 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:39.505 Zero copy mechanism will not be used. 00:20:39.505 Running I/O for 4 seconds... 
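The first pass drives queue depth 1 with 69632-byte I/Os; 69632 = 65536 + 4096, one 4 KiB block past bdevperf's 64 KiB zero-copy threshold, hence the warning above that buffers will be copied. A hedged sketch of re-issuing the same workload by hand (flags copied from this run; it assumes a bdevperf process is already up and waiting for RPCs, e.g. one started with -z):

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        perform_tests -q 1 -w randwrite -t 4 -o 69632   # -o is the I/O size in bytes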
00:20:41.836 747.00 IOPS, 49.61 MiB/s [2024-12-09T14:53:20.901Z] 794.00 IOPS, 52.73 MiB/s [2024-12-09T14:53:21.846Z] 783.67 IOPS, 52.04 MiB/s [2024-12-09T14:53:21.846Z] 780.75 IOPS, 51.85 MiB/s 00:20:43.724 Latency(us) 00:20:43.724 [2024-12-09T14:53:21.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.724 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:43.724 ftl0 : 4.00 780.60 51.84 0.00 0.00 1367.01 335.56 2495.41 00:20:43.724 [2024-12-09T14:53:21.846Z] =================================================================================================================== 00:20:43.724 [2024-12-09T14:53:21.846Z] Total : 780.60 51.84 0.00 0.00 1367.01 335.56 2495.41 00:20:43.724 [2024-12-09 14:53:21.547718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:43.724 { 00:20:43.724 "results": [ 00:20:43.724 { 00:20:43.724 "job": "ftl0", 00:20:43.724 "core_mask": "0x1", 00:20:43.724 "workload": "randwrite", 00:20:43.724 "status": "finished", 00:20:43.724 "queue_depth": 1, 00:20:43.724 "io_size": 69632, 00:20:43.724 "runtime": 4.002051, 00:20:43.724 "iops": 780.5997474794799, 00:20:43.724 "mibps": 51.836701981059214, 00:20:43.724 "io_failed": 0, 00:20:43.724 "io_timeout": 0, 00:20:43.724 "avg_latency_us": 1367.0082300797794, 00:20:43.724 "min_latency_us": 335.55692307692306, 00:20:43.724 "max_latency_us": 2495.409230769231 00:20:43.724 } 00:20:43.724 ], 00:20:43.724 "core_count": 1 00:20:43.724 } 00:20:43.724 14:53:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:43.724 [2024-12-09 14:53:21.656740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:43.724 Running I/O for 4 seconds... 
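Each perform_tests call returns the JSON summary logged above, so a wrapper script can assert on the numbers instead of scraping the latency table. A small sketch (field names copied from this run's output; results.json is a hypothetical file holding that blob):

    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"' results.json
    # -> ftl0: 780.5997474794799 IOPS, 51.836701981059214 MiB/s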
00:20:45.612 6457.00 IOPS, 25.22 MiB/s [2024-12-09T14:53:24.679Z] 5642.00 IOPS, 22.04 MiB/s [2024-12-09T14:53:26.113Z] 5230.67 IOPS, 20.43 MiB/s [2024-12-09T14:53:26.113Z] 5026.50 IOPS, 19.63 MiB/s 00:20:47.991 Latency(us) 00:20:47.991 [2024-12-09T14:53:26.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.991 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:47.991 ftl0 : 4.04 5013.11 19.58 0.00 0.00 25427.51 437.96 49404.06 00:20:47.991 [2024-12-09T14:53:26.113Z] =================================================================================================================== 00:20:47.991 [2024-12-09T14:53:26.113Z] Total : 5013.11 19.58 0.00 0.00 25427.51 0.00 49404.06 00:20:47.991 [2024-12-09 14:53:25.701447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:47.991 { 00:20:47.991 "results": [ 00:20:47.991 { 00:20:47.991 "job": "ftl0", 00:20:47.991 "core_mask": "0x1", 00:20:47.991 "workload": "randwrite", 00:20:47.991 "status": "finished", 00:20:47.991 "queue_depth": 128, 00:20:47.991 "io_size": 4096, 00:20:47.991 "runtime": 4.035817, 00:20:47.991 "iops": 5013.1113477147255, 00:20:47.991 "mibps": 19.582466202010647, 00:20:47.991 "io_failed": 0, 00:20:47.991 "io_timeout": 0, 00:20:47.991 "avg_latency_us": 25427.51469537975, 00:20:47.991 "min_latency_us": 437.9569230769231, 00:20:47.991 "max_latency_us": 49404.06153846154 00:20:47.991 } 00:20:47.991 ], 00:20:47.991 "core_count": 1 00:20:47.991 } 00:20:47.991 14:53:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:47.991 [2024-12-09 14:53:25.822039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:47.991 Running I/O for 4 seconds... 
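The q128/4 KiB randwrite pass above settles at roughly 5013 IOPS, and the MiB/s column is simply IOPS times I/O size. Re-deriving it from this run's numbers (plain arithmetic, no SPDK involved):

    python3 -c 'print(5013.11 * 4096 / 2**20)'   # ~19.58 MiB/s, matching the table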
00:20:49.884 4328.00 IOPS, 16.91 MiB/s [2024-12-09T14:53:28.950Z] 4276.00 IOPS, 16.70 MiB/s [2024-12-09T14:53:29.895Z] 4211.67 IOPS, 16.45 MiB/s [2024-12-09T14:53:29.895Z] 4203.75 IOPS, 16.42 MiB/s 00:20:51.773 Latency(us) 00:20:51.773 [2024-12-09T14:53:29.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.773 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:51.773 Verification LBA range: start 0x0 length 0x1400000 00:20:51.773 ftl0 : 4.02 4214.43 16.46 0.00 0.00 30271.66 513.58 43757.88 00:20:51.773 [2024-12-09T14:53:29.895Z] =================================================================================================================== 00:20:51.773 [2024-12-09T14:53:29.895Z] Total : 4214.43 16.46 0.00 0.00 30271.66 0.00 43757.88 00:20:51.773 [2024-12-09 14:53:29.859394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:51.773 { 00:20:51.773 "results": [ 00:20:51.773 { 00:20:51.773 "job": "ftl0", 00:20:51.773 "core_mask": "0x1", 00:20:51.773 "workload": "verify", 00:20:51.774 "status": "finished", 00:20:51.774 "verify_range": { 00:20:51.774 "start": 0, 00:20:51.774 "length": 20971520 00:20:51.774 }, 00:20:51.774 "queue_depth": 128, 00:20:51.774 "io_size": 4096, 00:20:51.774 "runtime": 4.020001, 00:20:51.774 "iops": 4214.426812331639, 00:20:51.774 "mibps": 16.462604735670464, 00:20:51.774 "io_failed": 0, 00:20:51.774 "io_timeout": 0, 00:20:51.774 "avg_latency_us": 30271.664526393215, 00:20:51.774 "min_latency_us": 513.5753846153846, 00:20:51.774 "max_latency_us": 43757.88307692308 00:20:51.774 } 00:20:51.774 ], 00:20:51.774 "core_count": 1 00:20:51.774 } 00:20:51.774 14:53:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:52.035 [2024-12-09 14:53:30.063142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.035 [2024-12-09 14:53:30.063446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.035 [2024-12-09 14:53:30.063472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:52.035 [2024-12-09 14:53:30.063487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.035 [2024-12-09 14:53:30.063522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.035 [2024-12-09 14:53:30.066766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.035 [2024-12-09 14:53:30.066970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.035 [2024-12-09 14:53:30.067002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:20:52.035 [2024-12-09 14:53:30.067011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.036 [2024-12-09 14:53:30.070077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.036 [2024-12-09 14:53:30.070232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.036 [2024-12-09 14:53:30.070259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:20:52.036 [2024-12-09 14:53:30.070268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.311919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.312135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:52.297 [2024-12-09 14:53:30.312168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 241.621 ms 00:20:52.297 [2024-12-09 14:53:30.312178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.318414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.318458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.297 [2024-12-09 14:53:30.318473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:20:52.297 [2024-12-09 14:53:30.318487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.341169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.341211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.297 [2024-12-09 14:53:30.341225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.616 ms 00:20:52.297 [2024-12-09 14:53:30.341232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.357317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.357358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.297 [2024-12-09 14:53:30.357370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.039 ms 00:20:52.297 [2024-12-09 14:53:30.357378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.357510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.357520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.297 [2024-12-09 14:53:30.357533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:52.297 [2024-12-09 14:53:30.357540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.377091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.377229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.297 [2024-12-09 14:53:30.377248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.534 ms 00:20:52.297 [2024-12-09 14:53:30.377254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.396250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.396368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.297 [2024-12-09 14:53:30.396387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.965 ms 00:20:52.297 [2024-12-09 14:53:30.396393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.297 [2024-12-09 14:53:30.414264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.297 [2024-12-09 14:53:30.414291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.297 [2024-12-09 14:53:30.414301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.844 ms 00:20:52.297 [2024-12-09 14:53:30.414306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.558 [2024-12-09 14:53:30.431851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.558 [2024-12-09 14:53:30.431876] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.558 [2024-12-09 14:53:30.431888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.486 ms 00:20:52.558 [2024-12-09 14:53:30.431893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.558 [2024-12-09 14:53:30.431921] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.558 [2024-12-09 14:53:30.431933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.431998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:52.559 [2024-12-09 14:53:30.432088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.559 [2024-12-09 14:53:30.432565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.560 [2024-12-09 14:53:30.432633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.560 [2024-12-09 14:53:30.432641] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65180ac7-6499-4ec6-9b67-63923e4bcfb5 00:20:52.560 [2024-12-09 14:53:30.432649] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.560 [2024-12-09 14:53:30.432656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.560 [2024-12-09 14:53:30.432662] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.560 [2024-12-09 14:53:30.432670] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.560 [2024-12-09 14:53:30.432675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.560 [2024-12-09 14:53:30.432684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.560 [2024-12-09 14:53:30.432690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.560 [2024-12-09 14:53:30.432698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.560 [2024-12-09 14:53:30.432704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.560 [2024-12-09 14:53:30.432712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.560 [2024-12-09 14:53:30.432718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.560 [2024-12-09 14:53:30.432726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:20:52.560 [2024-12-09 14:53:30.432731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.443211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.560 [2024-12-09 14:53:30.443236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.560 [2024-12-09 14:53:30.443246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.453 ms 00:20:52.560 [2024-12-09 14:53:30.443252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.443538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.560 [2024-12-09 14:53:30.443546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.560 [2024-12-09 14:53:30.443555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:52.560 [2024-12-09 14:53:30.443561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.472817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.472842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.560 [2024-12-09 14:53:30.472853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.472860] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.472906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.472913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.560 [2024-12-09 14:53:30.472920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.472927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.472983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.472991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.560 [2024-12-09 14:53:30.472999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.473005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.473018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.473025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.560 [2024-12-09 14:53:30.473033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.473039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.536078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.536113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.560 [2024-12-09 14:53:30.536126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.536133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.560 [2024-12-09 14:53:30.588410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.560 [2024-12-09 14:53:30.588522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.560 [2024-12-09 14:53:30.588580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.560 [2024-12-09 14:53:30.588682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:52.560 [2024-12-09 14:53:30.588687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.560 [2024-12-09 14:53:30.588731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.560 [2024-12-09 14:53:30.588787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.588859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.560 [2024-12-09 14:53:30.588868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.560 [2024-12-09 14:53:30.588876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.560 [2024-12-09 14:53:30.588882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.560 [2024-12-09 14:53:30.589020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.835 ms, result 0 00:20:52.560 true 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77437 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77437 ']' 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77437 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77437 00:20:52.560 killing process with pid 77437 00:20:52.560 Received shutdown signal, test time was about 4.000000 seconds 00:20:52.560 00:20:52.560 Latency(us) 00:20:52.560 [2024-12-09T14:53:30.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.560 [2024-12-09T14:53:30.682Z] =================================================================================================================== 00:20:52.560 [2024-12-09T14:53:30.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77437' 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77437 00:20:52.560 14:53:30 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77437 00:20:57.857 Remove shared memory files 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:57.857 14:53:35 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:57.857 ************************************ 00:20:57.857 END TEST ftl_bdevperf 00:20:57.857 ************************************ 00:20:57.857 00:20:57.857 real 0m26.622s 00:20:57.857 user 0m29.146s 00:20:57.857 sys 0m1.037s 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.857 14:53:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:57.857 14:53:35 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:57.857 14:53:35 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:57.857 14:53:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.857 14:53:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:57.857 ************************************ 00:20:57.857 START TEST ftl_trim 00:20:57.857 ************************************ 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:57.857 * Looking for test storage... 00:20:57.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.857 14:53:35 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.857 --rc genhtml_branch_coverage=1 00:20:57.857 --rc genhtml_function_coverage=1 00:20:57.857 --rc genhtml_legend=1 00:20:57.857 --rc geninfo_all_blocks=1 00:20:57.857 --rc geninfo_unexecuted_blocks=1 00:20:57.857 00:20:57.857 ' 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.857 --rc genhtml_branch_coverage=1 00:20:57.857 --rc genhtml_function_coverage=1 00:20:57.857 --rc genhtml_legend=1 00:20:57.857 --rc geninfo_all_blocks=1 00:20:57.857 --rc geninfo_unexecuted_blocks=1 00:20:57.857 00:20:57.857 ' 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.857 --rc genhtml_branch_coverage=1 00:20:57.857 --rc genhtml_function_coverage=1 00:20:57.857 --rc genhtml_legend=1 00:20:57.857 --rc geninfo_all_blocks=1 00:20:57.857 --rc geninfo_unexecuted_blocks=1 00:20:57.857 00:20:57.857 ' 00:20:57.857 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:57.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.857 --rc genhtml_branch_coverage=1 00:20:57.857 --rc genhtml_function_coverage=1 00:20:57.857 --rc genhtml_legend=1 00:20:57.857 --rc geninfo_all_blocks=1 00:20:57.857 --rc geninfo_unexecuted_blocks=1 00:20:57.857 00:20:57.857 ' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
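The xtrace above steps through the lcov version gate in scripts/common.sh: cmp_versions splits each version string on the separator set ".-:", compares the fields numerically one by one, and "lt 1.15 2" therefore returns true, so the legacy branch/function-coverage LCOV_OPTS get exported. A minimal Python sketch of that element-wise comparison follows; it is an illustrative re-implementation of the traced shell logic, not code from the SPDK tree.

    import re

    def version_lt(v1: str, v2: str) -> bool:
        # Split on the same separator set the shell helper uses (IFS=.-:)
        a = [int(x) for x in re.split(r"[.\-:]", v1) if x.isdigit()]
        b = [int(x) for x in re.split(r"[.\-:]", v2) if x.isdigit()]
        for x, y in zip(a, b):
            # The first differing field decides, as in the (( ver1[v] < ver2[v] )) checks
            if x != y:
                return x < y
        # Equal common prefix: the version with fewer fields is the older one
        return len(a) < len(b)

    print(version_lt("1.15", "2"))  # True -> lcov 1.15 predates 2, legacy flags are used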
00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:57.857 14:53:35 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.858 14:53:35 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=77789 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:57.858 14:53:35 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 77789 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77789 ']' 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.858 14:53:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 [2024-12-09 14:53:36.018734] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:20:58.119 [2024-12-09 14:53:36.019171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77789 ] 00:20:58.119 [2024-12-09 14:53:36.178550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:58.380 [2024-12-09 14:53:36.272443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.380 [2024-12-09 14:53:36.275849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.380 [2024-12-09 14:53:36.275857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.953 14:53:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.953 14:53:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:58.953 14:53:36 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:59.214 14:53:37 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:59.214 14:53:37 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:59.214 14:53:37 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:59.214 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:59.214 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.214 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:59.214 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:59.214 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:59.476 { 00:20:59.476 "name": "nvme0n1", 00:20:59.476 "aliases": [ 
00:20:59.476 "1007bd99-3a68-4365-a869-68637272b5e5" 00:20:59.476 ], 00:20:59.476 "product_name": "NVMe disk", 00:20:59.476 "block_size": 4096, 00:20:59.476 "num_blocks": 1310720, 00:20:59.476 "uuid": "1007bd99-3a68-4365-a869-68637272b5e5", 00:20:59.476 "numa_id": -1, 00:20:59.476 "assigned_rate_limits": { 00:20:59.476 "rw_ios_per_sec": 0, 00:20:59.476 "rw_mbytes_per_sec": 0, 00:20:59.476 "r_mbytes_per_sec": 0, 00:20:59.476 "w_mbytes_per_sec": 0 00:20:59.476 }, 00:20:59.476 "claimed": true, 00:20:59.476 "claim_type": "read_many_write_one", 00:20:59.476 "zoned": false, 00:20:59.476 "supported_io_types": { 00:20:59.476 "read": true, 00:20:59.476 "write": true, 00:20:59.476 "unmap": true, 00:20:59.476 "flush": true, 00:20:59.476 "reset": true, 00:20:59.476 "nvme_admin": true, 00:20:59.476 "nvme_io": true, 00:20:59.476 "nvme_io_md": false, 00:20:59.476 "write_zeroes": true, 00:20:59.476 "zcopy": false, 00:20:59.476 "get_zone_info": false, 00:20:59.476 "zone_management": false, 00:20:59.476 "zone_append": false, 00:20:59.476 "compare": true, 00:20:59.476 "compare_and_write": false, 00:20:59.476 "abort": true, 00:20:59.476 "seek_hole": false, 00:20:59.476 "seek_data": false, 00:20:59.476 "copy": true, 00:20:59.476 "nvme_iov_md": false 00:20:59.476 }, 00:20:59.476 "driver_specific": { 00:20:59.476 "nvme": [ 00:20:59.476 { 00:20:59.476 "pci_address": "0000:00:11.0", 00:20:59.476 "trid": { 00:20:59.476 "trtype": "PCIe", 00:20:59.476 "traddr": "0000:00:11.0" 00:20:59.476 }, 00:20:59.476 "ctrlr_data": { 00:20:59.476 "cntlid": 0, 00:20:59.476 "vendor_id": "0x1b36", 00:20:59.476 "model_number": "QEMU NVMe Ctrl", 00:20:59.476 "serial_number": "12341", 00:20:59.476 "firmware_revision": "8.0.0", 00:20:59.476 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:59.476 "oacs": { 00:20:59.476 "security": 0, 00:20:59.476 "format": 1, 00:20:59.476 "firmware": 0, 00:20:59.476 "ns_manage": 1 00:20:59.476 }, 00:20:59.476 "multi_ctrlr": false, 00:20:59.476 "ana_reporting": false 00:20:59.476 }, 00:20:59.476 "vs": { 00:20:59.476 "nvme_version": "1.4" 00:20:59.476 }, 00:20:59.476 "ns_data": { 00:20:59.476 "id": 1, 00:20:59.476 "can_share": false 00:20:59.476 } 00:20:59.476 } 00:20:59.476 ], 00:20:59.476 "mp_policy": "active_passive" 00:20:59.476 } 00:20:59.476 } 00:20:59.476 ]' 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:59.476 14:53:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:59.476 14:53:37 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:59.476 14:53:37 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:59.476 14:53:37 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:59.476 14:53:37 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:59.476 14:53:37 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:59.738 14:53:37 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d 00:20:59.738 14:53:37 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:59.738 14:53:37 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 6b84eef2-0b56-4bd6-be86-6c9fcfdcda9d 00:20:59.999 14:53:37 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:59.999 14:53:38 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f2d6774e-5e05-47ab-8b12-11edaf6a6744 00:20:59.999 14:53:38 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f2d6774e-5e05-47ab-8b12-11edaf6a6744 00:21:00.258 14:53:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.258 14:53:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.258 14:53:38 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:00.259 14:53:38 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:00.259 14:53:38 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.259 14:53:38 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:00.259 14:53:38 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.259 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.259 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:00.259 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:00.259 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:00.259 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:00.517 { 00:21:00.517 "name": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:00.517 "aliases": [ 00:21:00.517 "lvs/nvme0n1p0" 00:21:00.517 ], 00:21:00.517 "product_name": "Logical Volume", 00:21:00.517 "block_size": 4096, 00:21:00.517 "num_blocks": 26476544, 00:21:00.517 "uuid": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:00.517 "assigned_rate_limits": { 00:21:00.517 "rw_ios_per_sec": 0, 00:21:00.517 "rw_mbytes_per_sec": 0, 00:21:00.517 "r_mbytes_per_sec": 0, 00:21:00.517 "w_mbytes_per_sec": 0 00:21:00.517 }, 00:21:00.517 "claimed": false, 00:21:00.517 "zoned": false, 00:21:00.517 "supported_io_types": { 00:21:00.517 "read": true, 00:21:00.517 "write": true, 00:21:00.517 "unmap": true, 00:21:00.517 "flush": false, 00:21:00.517 "reset": true, 00:21:00.517 "nvme_admin": false, 00:21:00.517 "nvme_io": false, 00:21:00.517 "nvme_io_md": false, 00:21:00.517 "write_zeroes": true, 00:21:00.517 "zcopy": false, 00:21:00.517 "get_zone_info": false, 00:21:00.517 "zone_management": false, 00:21:00.517 "zone_append": false, 00:21:00.517 "compare": false, 00:21:00.517 "compare_and_write": false, 00:21:00.517 "abort": false, 00:21:00.517 "seek_hole": true, 00:21:00.517 "seek_data": true, 00:21:00.517 "copy": false, 00:21:00.517 "nvme_iov_md": false 00:21:00.517 }, 00:21:00.517 "driver_specific": { 00:21:00.517 "lvol": { 00:21:00.517 "lvol_store_uuid": "f2d6774e-5e05-47ab-8b12-11edaf6a6744", 00:21:00.517 "base_bdev": "nvme0n1", 00:21:00.517 "thin_provision": true, 00:21:00.517 "num_allocated_clusters": 0, 00:21:00.517 "snapshot": false, 00:21:00.517 "clone": false, 00:21:00.517 "esnap_clone": false 00:21:00.517 } 00:21:00.517 } 00:21:00.517 } 00:21:00.517 ]' 00:21:00.517 14:53:38 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:00.517 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:00.517 14:53:38 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:00.517 14:53:38 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:00.517 14:53:38 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:00.775 14:53:38 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:00.775 14:53:38 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:00.775 14:53:38 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.775 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b74a849b-b692-4093-a2a5-87680f56baac 00:21:00.775 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:00.775 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:00.775 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:00.775 14:53:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b74a849b-b692-4093-a2a5-87680f56baac 00:21:01.034 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.034 { 00:21:01.034 "name": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:01.034 "aliases": [ 00:21:01.034 "lvs/nvme0n1p0" 00:21:01.034 ], 00:21:01.034 "product_name": "Logical Volume", 00:21:01.034 "block_size": 4096, 00:21:01.034 "num_blocks": 26476544, 00:21:01.034 "uuid": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:01.034 "assigned_rate_limits": { 00:21:01.034 "rw_ios_per_sec": 0, 00:21:01.034 "rw_mbytes_per_sec": 0, 00:21:01.034 "r_mbytes_per_sec": 0, 00:21:01.034 "w_mbytes_per_sec": 0 00:21:01.034 }, 00:21:01.034 "claimed": false, 00:21:01.034 "zoned": false, 00:21:01.034 "supported_io_types": { 00:21:01.034 "read": true, 00:21:01.034 "write": true, 00:21:01.034 "unmap": true, 00:21:01.034 "flush": false, 00:21:01.034 "reset": true, 00:21:01.034 "nvme_admin": false, 00:21:01.034 "nvme_io": false, 00:21:01.034 "nvme_io_md": false, 00:21:01.034 "write_zeroes": true, 00:21:01.034 "zcopy": false, 00:21:01.034 "get_zone_info": false, 00:21:01.034 "zone_management": false, 00:21:01.034 "zone_append": false, 00:21:01.034 "compare": false, 00:21:01.034 "compare_and_write": false, 00:21:01.034 "abort": false, 00:21:01.034 "seek_hole": true, 00:21:01.034 "seek_data": true, 00:21:01.034 "copy": false, 00:21:01.034 "nvme_iov_md": false 00:21:01.034 }, 00:21:01.034 "driver_specific": { 00:21:01.034 "lvol": { 00:21:01.034 "lvol_store_uuid": "f2d6774e-5e05-47ab-8b12-11edaf6a6744", 00:21:01.034 "base_bdev": "nvme0n1", 00:21:01.034 "thin_provision": true, 00:21:01.034 "num_allocated_clusters": 0, 00:21:01.034 "snapshot": false, 00:21:01.034 "clone": false, 00:21:01.034 "esnap_clone": false 00:21:01.034 } 00:21:01.034 } 00:21:01.034 } 00:21:01.034 ]' 00:21:01.034 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:01.034 14:53:39 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:01.034 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:01.034 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:01.035 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:01.035 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:01.035 14:53:39 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:01.035 14:53:39 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:01.293 14:53:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:01.293 14:53:39 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:01.293 14:53:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b74a849b-b692-4093-a2a5-87680f56baac 00:21:01.293 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b74a849b-b692-4093-a2a5-87680f56baac 00:21:01.293 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.293 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:01.293 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:01.293 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b74a849b-b692-4093-a2a5-87680f56baac 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.552 { 00:21:01.552 "name": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:01.552 "aliases": [ 00:21:01.552 "lvs/nvme0n1p0" 00:21:01.552 ], 00:21:01.552 "product_name": "Logical Volume", 00:21:01.552 "block_size": 4096, 00:21:01.552 "num_blocks": 26476544, 00:21:01.552 "uuid": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:01.552 "assigned_rate_limits": { 00:21:01.552 "rw_ios_per_sec": 0, 00:21:01.552 "rw_mbytes_per_sec": 0, 00:21:01.552 "r_mbytes_per_sec": 0, 00:21:01.552 "w_mbytes_per_sec": 0 00:21:01.552 }, 00:21:01.552 "claimed": false, 00:21:01.552 "zoned": false, 00:21:01.552 "supported_io_types": { 00:21:01.552 "read": true, 00:21:01.552 "write": true, 00:21:01.552 "unmap": true, 00:21:01.552 "flush": false, 00:21:01.552 "reset": true, 00:21:01.552 "nvme_admin": false, 00:21:01.552 "nvme_io": false, 00:21:01.552 "nvme_io_md": false, 00:21:01.552 "write_zeroes": true, 00:21:01.552 "zcopy": false, 00:21:01.552 "get_zone_info": false, 00:21:01.552 "zone_management": false, 00:21:01.552 "zone_append": false, 00:21:01.552 "compare": false, 00:21:01.552 "compare_and_write": false, 00:21:01.552 "abort": false, 00:21:01.552 "seek_hole": true, 00:21:01.552 "seek_data": true, 00:21:01.552 "copy": false, 00:21:01.552 "nvme_iov_md": false 00:21:01.552 }, 00:21:01.552 "driver_specific": { 00:21:01.552 "lvol": { 00:21:01.552 "lvol_store_uuid": "f2d6774e-5e05-47ab-8b12-11edaf6a6744", 00:21:01.552 "base_bdev": "nvme0n1", 00:21:01.552 "thin_provision": true, 00:21:01.552 "num_allocated_clusters": 0, 00:21:01.552 "snapshot": false, 00:21:01.552 "clone": false, 00:21:01.552 "esnap_clone": false 00:21:01.552 } 00:21:01.552 } 00:21:01.552 } 00:21:01.552 ]' 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:01.552 14:53:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:01.552 14:53:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:01.552 14:53:39 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b74a849b-b692-4093-a2a5-87680f56baac -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:01.811 [2024-12-09 14:53:39.771604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.771641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.811 [2024-12-09 14:53:39.771656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.811 [2024-12-09 14:53:39.771662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.773984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.774012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.811 [2024-12-09 14:53:39.774023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:21:01.811 [2024-12-09 14:53:39.774030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.774109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:01.811 [2024-12-09 14:53:39.774655] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.811 [2024-12-09 14:53:39.774679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.774686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.811 [2024-12-09 14:53:39.774695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:21:01.811 [2024-12-09 14:53:39.774701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.774786] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:01.811 [2024-12-09 14:53:39.776041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.776071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:01.811 [2024-12-09 14:53:39.776081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:01.811 [2024-12-09 14:53:39.776091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.782731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.782894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.811 [2024-12-09 14:53:39.782910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.567 ms 00:21:01.811 [2024-12-09 14:53:39.782918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.783025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.783037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.811 [2024-12-09 14:53:39.783044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.062 ms 00:21:01.811 [2024-12-09 14:53:39.783053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.783084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.783093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.811 [2024-12-09 14:53:39.783100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:01.811 [2024-12-09 14:53:39.783109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.811 [2024-12-09 14:53:39.783136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:01.811 [2024-12-09 14:53:39.786288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.811 [2024-12-09 14:53:39.786394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.811 [2024-12-09 14:53:39.786410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:21:01.811 [2024-12-09 14:53:39.786417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.812 [2024-12-09 14:53:39.786468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.812 [2024-12-09 14:53:39.786487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.812 [2024-12-09 14:53:39.786495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:01.812 [2024-12-09 14:53:39.786501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.812 [2024-12-09 14:53:39.786528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:01.812 [2024-12-09 14:53:39.786641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.812 [2024-12-09 14:53:39.786654] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.812 [2024-12-09 14:53:39.786663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:01.812 [2024-12-09 14:53:39.786673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.812 [2024-12-09 14:53:39.786680] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.812 [2024-12-09 14:53:39.786688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:01.812 [2024-12-09 14:53:39.786693] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.812 [2024-12-09 14:53:39.786702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.812 [2024-12-09 14:53:39.786710] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.812 [2024-12-09 14:53:39.786717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.812 [2024-12-09 14:53:39.786723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.812 [2024-12-09 14:53:39.786731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:01.812 [2024-12-09 14:53:39.786737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.812 [2024-12-09 14:53:39.786836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.812 
[2024-12-09 14:53:39.786844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.812 [2024-12-09 14:53:39.786852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:01.812 [2024-12-09 14:53:39.786858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.812 [2024-12-09 14:53:39.786964] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.812 [2024-12-09 14:53:39.786972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.812 [2024-12-09 14:53:39.786980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.812 [2024-12-09 14:53:39.786986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.786994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.812 [2024-12-09 14:53:39.786999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.812 [2024-12-09 14:53:39.787017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.812 [2024-12-09 14:53:39.787029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.812 [2024-12-09 14:53:39.787034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:01.812 [2024-12-09 14:53:39.787044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.812 [2024-12-09 14:53:39.787049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.812 [2024-12-09 14:53:39.787057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:01.812 [2024-12-09 14:53:39.787063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.812 [2024-12-09 14:53:39.787076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.812 [2024-12-09 14:53:39.787096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.812 [2024-12-09 14:53:39.787112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.812 [2024-12-09 14:53:39.787132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:01.812 [2024-12-09 14:53:39.787149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.812 [2024-12-09 14:53:39.787169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.812 [2024-12-09 14:53:39.787180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:01.812 [2024-12-09 14:53:39.787185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:01.812 [2024-12-09 14:53:39.787191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.812 [2024-12-09 14:53:39.787196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.812 [2024-12-09 14:53:39.787205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:01.812 [2024-12-09 14:53:39.787210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.812 [2024-12-09 14:53:39.787221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:01.812 [2024-12-09 14:53:39.787228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787233] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.812 [2024-12-09 14:53:39.787239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.812 [2024-12-09 14:53:39.787245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.812 [2024-12-09 14:53:39.787259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.812 [2024-12-09 14:53:39.787267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.812 [2024-12-09 14:53:39.787272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.812 [2024-12-09 14:53:39.787278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.812 [2024-12-09 14:53:39.787283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.812 [2024-12-09 14:53:39.787289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.812 [2024-12-09 14:53:39.787296] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.812 [2024-12-09 14:53:39.787304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.812 [2024-12-09 14:53:39.787313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:01.812 [2024-12-09 14:53:39.787319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:01.812 [2024-12-09 14:53:39.787326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:01.812 [2024-12-09 14:53:39.787334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:01.812 [2024-12-09 14:53:39.787339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:01.812 [2024-12-09 14:53:39.787346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:01.812 [2024-12-09 14:53:39.787352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:01.812 [2024-12-09 14:53:39.787359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:01.812 [2024-12-09 14:53:39.787365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:01.812 [2024-12-09 14:53:39.787374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:01.812 [2024-12-09 14:53:39.787379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:01.812 [2024-12-09 14:53:39.787386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:01.812 [2024-12-09 14:53:39.787392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:01.812 [2024-12-09 14:53:39.787399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:01.812 [2024-12-09 14:53:39.787404] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.812 [2024-12-09 14:53:39.787413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.813 [2024-12-09 14:53:39.787420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.813 [2024-12-09 14:53:39.787427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.813 [2024-12-09 14:53:39.787432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.813 [2024-12-09 14:53:39.787439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.813 [2024-12-09 14:53:39.787445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.813 [2024-12-09 14:53:39.787452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.813 [2024-12-09 14:53:39.787459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:21:01.813 [2024-12-09 14:53:39.787465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.813 [2024-12-09 14:53:39.787544] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:01.813 [2024-12-09 14:53:39.787555] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:04.343 [2024-12-09 14:53:42.363771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.364039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:04.343 [2024-12-09 14:53:42.364183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2576.217 ms 00:21:04.343 [2024-12-09 14:53:42.364213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.392558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.392730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.343 [2024-12-09 14:53:42.392814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.079 ms 00:21:04.343 [2024-12-09 14:53:42.392847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.392997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.393230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.343 [2024-12-09 14:53:42.393274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:04.343 [2024-12-09 14:53:42.393298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.443716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.443901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.343 [2024-12-09 14:53:42.444097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.365 ms 00:21:04.343 [2024-12-09 14:53:42.444136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.444233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.444264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.343 [2024-12-09 14:53:42.444285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.343 [2024-12-09 14:53:42.444306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.444907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.445015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.343 [2024-12-09 14:53:42.445071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:21:04.343 [2024-12-09 14:53:42.445096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.445231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.445348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.343 [2024-12-09 14:53:42.445387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:04.343 [2024-12-09 14:53:42.445410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.343 [2024-12-09 14:53:42.461575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.343 [2024-12-09 14:53:42.461694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:04.343 [2024-12-09 14:53:42.461747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.117 ms 00:21:04.343 [2024-12-09 14:53:42.461773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.474057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:04.601 [2024-12-09 14:53:42.491691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.491810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.601 [2024-12-09 14:53:42.491861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.768 ms 00:21:04.601 [2024-12-09 14:53:42.491884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.567623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.567761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:04.601 [2024-12-09 14:53:42.567838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.654 ms 00:21:04.601 [2024-12-09 14:53:42.567864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.568112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.568145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.601 [2024-12-09 14:53:42.568207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:04.601 [2024-12-09 14:53:42.568230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.591697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.591815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:04.601 [2024-12-09 14:53:42.591896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.423 ms 00:21:04.601 [2024-12-09 14:53:42.591923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.614041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.614143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:04.601 [2024-12-09 14:53:42.614206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.041 ms 00:21:04.601 [2024-12-09 14:53:42.614228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.614830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.614939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.601 [2024-12-09 14:53:42.614988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:04.601 [2024-12-09 14:53:42.615041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.601 [2024-12-09 14:53:42.689257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.689374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:04.601 [2024-12-09 14:53:42.689430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.157 ms 00:21:04.601 [2024-12-09 14:53:42.689454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
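The startup trace above also shows where the --l2p_dram_limit 60 argument from bdev_ftl_create lands: the layout dump reports 23592960 L2P entries at 4 bytes each, i.e. a 90 MiB logical-to-physical table, and the cache then pins at most 60 MiB of it in DRAM, hence "l2p maximum resident size is: 59 (of 60) MiB". A quick back-of-the-envelope check of those numbers, as a hypothetical Python sketch rather than SPDK code:

    # Numbers taken from the ftl0 layout dump and bdev_ftl_create flags above
    l2p_entries = 23592960   # "L2P entries: 23592960", one per 4 KiB logical block
    addr_size = 4            # "L2P address size: 4"

    table_mib = l2p_entries * addr_size / (1024 * 1024)
    print(table_mib)         # 90.0 -> matches "Region l2p ... blocks: 90.00 MiB"

    block_size = 4096
    capacity_gib = l2p_entries * block_size / (1024 ** 3)
    print(capacity_gib)      # 90.0 GiB of addressable ftl0 space; --l2p_dram_limit 60
                             # caps the resident slice of the 90 MiB table at 60 MiB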
00:21:04.601 [2024-12-09 14:53:42.714451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.601 [2024-12-09 14:53:42.714561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:04.601 [2024-12-09 14:53:42.714614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.855 ms 00:21:04.601 [2024-12-09 14:53:42.714637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.859 [2024-12-09 14:53:42.738552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.859 [2024-12-09 14:53:42.738656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:04.859 [2024-12-09 14:53:42.738706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.776 ms 00:21:04.859 [2024-12-09 14:53:42.738728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.859 [2024-12-09 14:53:42.761871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.859 [2024-12-09 14:53:42.762011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.859 [2024-12-09 14:53:42.762071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.848 ms 00:21:04.859 [2024-12-09 14:53:42.762096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.859 [2024-12-09 14:53:42.762170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.859 [2024-12-09 14:53:42.762198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.859 [2024-12-09 14:53:42.762223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:04.859 [2024-12-09 14:53:42.762242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.859 [2024-12-09 14:53:42.762337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.859 [2024-12-09 14:53:42.762405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.859 [2024-12-09 14:53:42.762420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:04.859 [2024-12-09 14:53:42.762428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.859 [2024-12-09 14:53:42.763327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.859 [2024-12-09 14:53:42.766281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2991.420 ms, result 0 00:21:04.859 [2024-12-09 14:53:42.767305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.859 { 00:21:04.859 "name": "ftl0", 00:21:04.859 "uuid": "5cea04ff-c544-4eb1-8911-42d62e850592" 00:21:04.859 } 00:21:04.859 14:53:42 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:04.859 14:53:42 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:05.117 14:53:42 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:05.117 [ 00:21:05.117 { 00:21:05.117 "name": "ftl0", 00:21:05.117 "aliases": [ 00:21:05.117 "5cea04ff-c544-4eb1-8911-42d62e850592" 00:21:05.117 ], 00:21:05.117 "product_name": "FTL disk", 00:21:05.117 "block_size": 4096, 00:21:05.117 "num_blocks": 23592960, 00:21:05.117 "uuid": "5cea04ff-c544-4eb1-8911-42d62e850592", 00:21:05.117 "assigned_rate_limits": { 00:21:05.117 "rw_ios_per_sec": 0, 00:21:05.117 "rw_mbytes_per_sec": 0, 00:21:05.117 "r_mbytes_per_sec": 0, 00:21:05.117 "w_mbytes_per_sec": 0 00:21:05.117 }, 00:21:05.117 "claimed": false, 00:21:05.117 "zoned": false, 00:21:05.117 "supported_io_types": { 00:21:05.117 "read": true, 00:21:05.117 "write": true, 00:21:05.117 "unmap": true, 00:21:05.117 "flush": true, 00:21:05.117 "reset": false, 00:21:05.117 "nvme_admin": false, 00:21:05.117 "nvme_io": false, 00:21:05.117 "nvme_io_md": false, 00:21:05.117 "write_zeroes": true, 00:21:05.117 "zcopy": false, 00:21:05.117 "get_zone_info": false, 00:21:05.117 "zone_management": false, 00:21:05.117 "zone_append": false, 00:21:05.117 "compare": false, 00:21:05.117 "compare_and_write": false, 00:21:05.117 "abort": false, 00:21:05.117 "seek_hole": false, 00:21:05.117 "seek_data": false, 00:21:05.117 "copy": false, 00:21:05.117 "nvme_iov_md": false 00:21:05.117 }, 00:21:05.117 "driver_specific": { 00:21:05.117 "ftl": { 00:21:05.117 "base_bdev": "b74a849b-b692-4093-a2a5-87680f56baac", 00:21:05.117 "cache": "nvc0n1p0" 00:21:05.117 } 00:21:05.117 } 00:21:05.117 } 00:21:05.117 ] 00:21:05.117 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:05.117 14:53:43 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:05.117 14:53:43 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:05.376 14:53:43 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:05.376 14:53:43 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:05.635 14:53:43 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:05.635 { 00:21:05.635 "name": "ftl0", 00:21:05.635 "aliases": [ 00:21:05.635 "5cea04ff-c544-4eb1-8911-42d62e850592" 00:21:05.635 ], 00:21:05.635 "product_name": "FTL disk", 00:21:05.635 "block_size": 4096, 00:21:05.635 "num_blocks": 23592960, 00:21:05.635 "uuid": "5cea04ff-c544-4eb1-8911-42d62e850592", 00:21:05.635 "assigned_rate_limits": { 00:21:05.635 "rw_ios_per_sec": 0, 00:21:05.635 "rw_mbytes_per_sec": 0, 00:21:05.635 "r_mbytes_per_sec": 0, 00:21:05.635 "w_mbytes_per_sec": 0 00:21:05.635 }, 00:21:05.635 "claimed": false, 00:21:05.635 "zoned": false, 00:21:05.635 "supported_io_types": { 00:21:05.635 "read": true, 00:21:05.635 "write": true, 00:21:05.635 "unmap": true, 00:21:05.635 "flush": true, 00:21:05.635 "reset": false, 00:21:05.635 "nvme_admin": false, 00:21:05.635 "nvme_io": false, 00:21:05.635 "nvme_io_md": false, 00:21:05.635 "write_zeroes": true, 00:21:05.635 "zcopy": false, 00:21:05.635 "get_zone_info": false, 00:21:05.635 "zone_management": false, 00:21:05.635 "zone_append": false, 00:21:05.635 "compare": false, 00:21:05.635 "compare_and_write": false, 00:21:05.635 "abort": false, 00:21:05.635 "seek_hole": false, 00:21:05.635 "seek_data": false, 00:21:05.635 "copy": false, 00:21:05.635 "nvme_iov_md": false 00:21:05.635 }, 00:21:05.635 "driver_specific": { 00:21:05.635 "ftl": { 00:21:05.635 "base_bdev": "b74a849b-b692-4093-a2a5-87680f56baac", 
00:21:05.635 "cache": "nvc0n1p0" 00:21:05.635 } 00:21:05.635 } 00:21:05.635 } 00:21:05.635 ]' 00:21:05.635 14:53:43 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:05.635 14:53:43 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:05.635 14:53:43 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:05.635 [2024-12-09 14:53:43.682317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.682350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:05.635 [2024-12-09 14:53:43.682361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:05.635 [2024-12-09 14:53:43.682371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.682400] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:05.635 [2024-12-09 14:53:43.684609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.686522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:05.635 [2024-12-09 14:53:43.686546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:21:05.635 [2024-12-09 14:53:43.686554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.687019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.687034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:05.635 [2024-12-09 14:53:43.687043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:21:05.635 [2024-12-09 14:53:43.687050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.689789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.689886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:05.635 [2024-12-09 14:53:43.689899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:21:05.635 [2024-12-09 14:53:43.689906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.695340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.695362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:05.635 [2024-12-09 14:53:43.695371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.397 ms 00:21:05.635 [2024-12-09 14:53:43.695377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.712849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.712952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:05.635 [2024-12-09 14:53:43.712971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.412 ms 00:21:05.635 [2024-12-09 14:53:43.712976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.725251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.725278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:05.635 [2024-12-09 14:53:43.725290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.225 ms 00:21:05.635 [2024-12-09 14:53:43.725299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.725468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.725477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:05.635 [2024-12-09 14:53:43.725486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:21:05.635 [2024-12-09 14:53:43.725491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.635 [2024-12-09 14:53:43.743166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.635 [2024-12-09 14:53:43.743190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:05.635 [2024-12-09 14:53:43.743200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.643 ms 00:21:05.635 [2024-12-09 14:53:43.743206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.895 [2024-12-09 14:53:43.761026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.895 [2024-12-09 14:53:43.761050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:05.895 [2024-12-09 14:53:43.761061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.764 ms 00:21:05.895 [2024-12-09 14:53:43.761067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.895 [2024-12-09 14:53:43.778188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.895 [2024-12-09 14:53:43.778212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:05.895 [2024-12-09 14:53:43.778222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:21:05.895 [2024-12-09 14:53:43.778228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.895 [2024-12-09 14:53:43.795336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.895 [2024-12-09 14:53:43.795360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:05.895 [2024-12-09 14:53:43.795369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.009 ms 00:21:05.895 [2024-12-09 14:53:43.795375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.895 [2024-12-09 14:53:43.795422] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:05.895 [2024-12-09 14:53:43.795434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795486] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 
[2024-12-09 14:53:43.795668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:05.895 [2024-12-09 14:53:43.795850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:05.895 [2024-12-09 14:53:43.795905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.795995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:05.896 [2024-12-09 14:53:43.796175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:05.896 [2024-12-09 14:53:43.796184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:05.896 [2024-12-09 14:53:43.796190] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:05.896 [2024-12-09 14:53:43.796197] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:05.896 [2024-12-09 14:53:43.796201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:05.896 [2024-12-09 14:53:43.796210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:05.896 [2024-12-09 14:53:43.796216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:05.896 [2024-12-09 14:53:43.796223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:05.896 [2024-12-09 14:53:43.796228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:05.896 [2024-12-09 14:53:43.796234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:05.896 [2024-12-09 14:53:43.796238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:05.896 [2024-12-09 14:53:43.796245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.896 [2024-12-09 14:53:43.796251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:05.896 [2024-12-09 14:53:43.796259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:21:05.896 [2024-12-09 14:53:43.796264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.806164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.896 [2024-12-09 14:53:43.806189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:05.896 [2024-12-09 14:53:43.806200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.869 ms 00:21:05.896 [2024-12-09 14:53:43.806206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.806514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.896 [2024-12-09 14:53:43.806522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:05.896 [2024-12-09 14:53:43.806530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:21:05.896 [2024-12-09 14:53:43.806536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.843042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.843070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.896 [2024-12-09 14:53:43.843080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.843087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.843160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.843167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.896 [2024-12-09 14:53:43.843176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.843182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.843233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.843242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.896 [2024-12-09 14:53:43.843254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.843260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.843286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.843292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.896 [2024-12-09 14:53:43.843300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.843306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.909447] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.909486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.896 [2024-12-09 14:53:43.909498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.909505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.896 [2024-12-09 14:53:43.960536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.960543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.896 [2024-12-09 14:53:43.960643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.960651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.896 [2024-12-09 14:53:43.960711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.960716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.896 [2024-12-09 14:53:43.960846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.960855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:05.896 [2024-12-09 14:53:43.960923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.960929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.960978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.960986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.896 [2024-12-09 14:53:43.960995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.961001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.896 [2024-12-09 14:53:43.961056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.896 [2024-12-09 14:53:43.961064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.896 [2024-12-09 14:53:43.961072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.896 [2024-12-09 14:53:43.961077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:05.896 [2024-12-09 14:53:43.961244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.899 ms, result 0 00:21:05.896 true 00:21:05.897 14:53:43 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 77789 00:21:05.897 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77789 ']' 00:21:05.897 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77789 00:21:05.897 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:05.897 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.897 14:53:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77789 00:21:05.897 killing process with pid 77789 00:21:05.897 14:53:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.897 14:53:44 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.897 14:53:44 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77789' 00:21:05.897 14:53:44 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77789 00:21:05.897 14:53:44 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77789 00:21:12.460 14:53:49 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:12.460 65536+0 records in 00:21:12.460 65536+0 records out 00:21:12.460 268435456 bytes (268 MB, 256 MiB) copied, 1.10075 s, 244 MB/s 00:21:12.460 14:53:50 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:12.718 [2024-12-09 14:53:50.607847] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:21:12.718 [2024-12-09 14:53:50.607929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77971 ] 00:21:12.718 [2024-12-09 14:53:50.756079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.718 [2024-12-09 14:53:50.830774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.978 [2024-12-09 14:53:51.055129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.978 [2024-12-09 14:53:51.055195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:13.240 [2024-12-09 14:53:51.211972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.240 [2024-12-09 14:53:51.212010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:13.240 [2024-12-09 14:53:51.212022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:13.240 [2024-12-09 14:53:51.212029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.240 [2024-12-09 14:53:51.214316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.240 [2024-12-09 14:53:51.214346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:13.240 [2024-12-09 14:53:51.214355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:21:13.240 [2024-12-09 14:53:51.214362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.240 [2024-12-09 14:53:51.214425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:13.240 [2024-12-09 14:53:51.214984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:13.240 [2024-12-09 14:53:51.214998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.240 [2024-12-09 14:53:51.215005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:13.240 [2024-12-09 14:53:51.215012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:21:13.240 [2024-12-09 14:53:51.215018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.240 [2024-12-09 14:53:51.216506] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:13.240 [2024-12-09 14:53:51.226956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.240 [2024-12-09 14:53:51.226986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:13.240 [2024-12-09 14:53:51.226996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.452 ms 00:21:13.240 [2024-12-09 14:53:51.227003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.240 [2024-12-09 14:53:51.227076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.240 [2024-12-09 14:53:51.227085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:13.241 [2024-12-09 14:53:51.227092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:13.241 [2024-12-09 14:53:51.227099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.233402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:13.241 [2024-12-09 14:53:51.233427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:13.241 [2024-12-09 14:53:51.233434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.272 ms 00:21:13.241 [2024-12-09 14:53:51.233440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.233515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.233523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:13.241 [2024-12-09 14:53:51.233530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:13.241 [2024-12-09 14:53:51.233536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.233556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.233563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:13.241 [2024-12-09 14:53:51.233570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:13.241 [2024-12-09 14:53:51.233576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.233598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:13.241 [2024-12-09 14:53:51.236694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.236869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:13.241 [2024-12-09 14:53:51.236882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:21:13.241 [2024-12-09 14:53:51.236890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.236925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.236933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:13.241 [2024-12-09 14:53:51.236939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:13.241 [2024-12-09 14:53:51.236945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.236962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:13.241 [2024-12-09 14:53:51.236979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:13.241 [2024-12-09 14:53:51.237009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:13.241 [2024-12-09 14:53:51.237022] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:13.241 [2024-12-09 14:53:51.237105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:13.241 [2024-12-09 14:53:51.237114] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:13.241 [2024-12-09 14:53:51.237122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:13.241 [2024-12-09 14:53:51.237133] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237140] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237147] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:13.241 [2024-12-09 14:53:51.237153] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:13.241 [2024-12-09 14:53:51.237160] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:13.241 [2024-12-09 14:53:51.237166] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:13.241 [2024-12-09 14:53:51.237172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.237177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:13.241 [2024-12-09 14:53:51.237183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:21:13.241 [2024-12-09 14:53:51.237189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.237266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.241 [2024-12-09 14:53:51.237276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:13.241 [2024-12-09 14:53:51.237282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:13.241 [2024-12-09 14:53:51.237287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.241 [2024-12-09 14:53:51.237366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:13.241 [2024-12-09 14:53:51.237374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:13.241 [2024-12-09 14:53:51.237381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:13.241 [2024-12-09 14:53:51.237400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:13.241 [2024-12-09 14:53:51.237417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:13.241 [2024-12-09 14:53:51.237428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:13.241 [2024-12-09 14:53:51.237440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:13.241 [2024-12-09 14:53:51.237447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:13.241 [2024-12-09 14:53:51.237452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:13.241 [2024-12-09 14:53:51.237458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:13.241 [2024-12-09 14:53:51.237463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:13.241 [2024-12-09 14:53:51.237474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237479] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:13.241 [2024-12-09 14:53:51.237489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:13.241 [2024-12-09 14:53:51.237504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:13.241 [2024-12-09 14:53:51.237519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:13.241 [2024-12-09 14:53:51.237536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:13.241 [2024-12-09 14:53:51.237551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:13.241 [2024-12-09 14:53:51.237562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:13.241 [2024-12-09 14:53:51.237567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:13.241 [2024-12-09 14:53:51.237572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:13.241 [2024-12-09 14:53:51.237577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:13.241 [2024-12-09 14:53:51.237582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:13.241 [2024-12-09 14:53:51.237587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:13.241 [2024-12-09 14:53:51.237597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:13.241 [2024-12-09 14:53:51.237602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:13.241 [2024-12-09 14:53:51.237613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:13.241 [2024-12-09 14:53:51.237621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.241 [2024-12-09 14:53:51.237633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:13.241 [2024-12-09 14:53:51.237638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:13.241 [2024-12-09 14:53:51.237643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:13.241 
[2024-12-09 14:53:51.237648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:13.241 [2024-12-09 14:53:51.237653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:13.241 [2024-12-09 14:53:51.237658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:13.241 [2024-12-09 14:53:51.237665] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:13.241 [2024-12-09 14:53:51.237672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:13.241 [2024-12-09 14:53:51.237678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:13.241 [2024-12-09 14:53:51.237684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:13.241 [2024-12-09 14:53:51.237689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:13.241 [2024-12-09 14:53:51.237695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:13.241 [2024-12-09 14:53:51.237700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:13.241 [2024-12-09 14:53:51.237705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:13.242 [2024-12-09 14:53:51.237711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:13.242 [2024-12-09 14:53:51.237716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:13.242 [2024-12-09 14:53:51.237722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:13.242 [2024-12-09 14:53:51.237727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:13.242 [2024-12-09 14:53:51.237754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:13.242 [2024-12-09 14:53:51.237760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:13.242 [2024-12-09 14:53:51.237772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:13.242 [2024-12-09 14:53:51.237778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:13.242 [2024-12-09 14:53:51.237784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:13.242 [2024-12-09 14:53:51.237789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.237798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:13.242 [2024-12-09 14:53:51.237815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:21:13.242 [2024-12-09 14:53:51.237821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.261999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.262026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.242 [2024-12-09 14:53:51.262035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.119 ms 00:21:13.242 [2024-12-09 14:53:51.262043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.262138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.262146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:13.242 [2024-12-09 14:53:51.262152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:13.242 [2024-12-09 14:53:51.262159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.303335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.303367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.242 [2024-12-09 14:53:51.303379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.159 ms 00:21:13.242 [2024-12-09 14:53:51.303386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.303461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.303471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.242 [2024-12-09 14:53:51.303478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:13.242 [2024-12-09 14:53:51.303484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.303891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.303905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.242 [2024-12-09 14:53:51.303918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:21:13.242 [2024-12-09 14:53:51.303924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.304041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.304050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.242 [2024-12-09 14:53:51.304058] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:13.242 [2024-12-09 14:53:51.304064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.316339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.316364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.242 [2024-12-09 14:53:51.316372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.258 ms 00:21:13.242 [2024-12-09 14:53:51.316378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.327143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:13.242 [2024-12-09 14:53:51.327172] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:13.242 [2024-12-09 14:53:51.327182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.327189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:13.242 [2024-12-09 14:53:51.327196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.724 ms 00:21:13.242 [2024-12-09 14:53:51.327202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.346002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.346041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:13.242 [2024-12-09 14:53:51.346050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.742 ms 00:21:13.242 [2024-12-09 14:53:51.346057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.242 [2024-12-09 14:53:51.355385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.242 [2024-12-09 14:53:51.355410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:13.242 [2024-12-09 14:53:51.355417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.275 ms 00:21:13.242 [2024-12-09 14:53:51.355423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.503 [2024-12-09 14:53:51.364566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.503 [2024-12-09 14:53:51.364589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:13.503 [2024-12-09 14:53:51.364597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.103 ms 00:21:13.503 [2024-12-09 14:53:51.364602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.503 [2024-12-09 14:53:51.365090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.503 [2024-12-09 14:53:51.365103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:13.503 [2024-12-09 14:53:51.365111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:21:13.503 [2024-12-09 14:53:51.365117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.503 [2024-12-09 14:53:51.413827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.503 [2024-12-09 14:53:51.413862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:13.503 [2024-12-09 14:53:51.413871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.692 ms 00:21:13.503 [2024-12-09 14:53:51.413878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.503 [2024-12-09 14:53:51.422061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:13.503 [2024-12-09 14:53:51.436465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.503 [2024-12-09 14:53:51.436633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:13.503 [2024-12-09 14:53:51.436647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.511 ms 00:21:13.503 [2024-12-09 14:53:51.436654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.503 [2024-12-09 14:53:51.436733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.503 [2024-12-09 14:53:51.436742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:13.503 [2024-12-09 14:53:51.436750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:13.504 [2024-12-09 14:53:51.436756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.504 [2024-12-09 14:53:51.436817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.504 [2024-12-09 14:53:51.436826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:13.504 [2024-12-09 14:53:51.436832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:13.504 [2024-12-09 14:53:51.436839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.504 [2024-12-09 14:53:51.436868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.504 [2024-12-09 14:53:51.436877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:13.504 [2024-12-09 14:53:51.436883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:13.504 [2024-12-09 14:53:51.436889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.504 [2024-12-09 14:53:51.436918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:13.504 [2024-12-09 14:53:51.436926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.504 [2024-12-09 14:53:51.436933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:13.504 [2024-12-09 14:53:51.436940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:13.504 [2024-12-09 14:53:51.436947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.504 [2024-12-09 14:53:51.456280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.504 [2024-12-09 14:53:51.456307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:13.504 [2024-12-09 14:53:51.456316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.315 ms 00:21:13.504 [2024-12-09 14:53:51.456323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.504 [2024-12-09 14:53:51.456395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.504 [2024-12-09 14:53:51.456403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:13.504 [2024-12-09 14:53:51.456410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:13.504 [2024-12-09 14:53:51.456417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:13.504 [2024-12-09 14:53:51.457185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:13.504 [2024-12-09 14:53:51.459472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.959 ms, result 0 00:21:13.504 [2024-12-09 14:53:51.460661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:13.504 [2024-12-09 14:53:51.471494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:14.447  [2024-12-09T14:53:53.514Z] Copying: 18/256 [MB] (18 MBps) [2024-12-09T14:53:54.899Z] Copying: 40/256 [MB] (22 MBps) [2024-12-09T14:53:55.842Z] Copying: 59/256 [MB] (19 MBps) [2024-12-09T14:53:56.784Z] Copying: 74/256 [MB] (14 MBps) [2024-12-09T14:53:57.727Z] Copying: 89/256 [MB] (14 MBps) [2024-12-09T14:53:58.720Z] Copying: 101/256 [MB] (12 MBps) [2024-12-09T14:53:59.679Z] Copying: 120/256 [MB] (19 MBps) [2024-12-09T14:54:00.624Z] Copying: 141/256 [MB] (20 MBps) [2024-12-09T14:54:01.573Z] Copying: 158/256 [MB] (17 MBps) [2024-12-09T14:54:02.516Z] Copying: 170/256 [MB] (11 MBps) [2024-12-09T14:54:03.903Z] Copying: 183/256 [MB] (12 MBps) [2024-12-09T14:54:04.476Z] Copying: 195/256 [MB] (12 MBps) [2024-12-09T14:54:05.862Z] Copying: 205/256 [MB] (10 MBps) [2024-12-09T14:54:06.807Z] Copying: 216/256 [MB] (10 MBps) [2024-12-09T14:54:07.753Z] Copying: 226/256 [MB] (10 MBps) [2024-12-09T14:54:08.697Z] Copying: 242112/262144 [kB] (9912 kBps) [2024-12-09T14:54:09.269Z] Copying: 247/256 [MB] (11 MBps) [2024-12-09T14:54:09.269Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-09 14:54:09.168186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.147 [2024-12-09 14:54:09.175966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.175996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.147 [2024-12-09 14:54:09.176008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.147 [2024-12-09 14:54:09.176022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.176039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:31.147 [2024-12-09 14:54:09.178257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.178280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.147 [2024-12-09 14:54:09.178289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:21:31.147 [2024-12-09 14:54:09.178296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.180686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.180716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.147 [2024-12-09 14:54:09.180725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:21:31.147 [2024-12-09 14:54:09.180731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.187259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.187291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:21:31.147 [2024-12-09 14:54:09.187298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.514 ms 00:21:31.147 [2024-12-09 14:54:09.187304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.192589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.192613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.147 [2024-12-09 14:54:09.192621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:21:31.147 [2024-12-09 14:54:09.192628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.211095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.211120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.147 [2024-12-09 14:54:09.211129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:21:31.147 [2024-12-09 14:54:09.211135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.223407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.223436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.147 [2024-12-09 14:54:09.223448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.243 ms 00:21:31.147 [2024-12-09 14:54:09.223454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.223551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.223558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.147 [2024-12-09 14:54:09.223565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:31.147 [2024-12-09 14:54:09.223577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.241914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.242033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.147 [2024-12-09 14:54:09.242046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.325 ms 00:21:31.147 [2024-12-09 14:54:09.242051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.147 [2024-12-09 14:54:09.259949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.147 [2024-12-09 14:54:09.259972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.147 [2024-12-09 14:54:09.259980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.872 ms 00:21:31.147 [2024-12-09 14:54:09.259986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.409 [2024-12-09 14:54:09.277221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.409 [2024-12-09 14:54:09.277322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.409 [2024-12-09 14:54:09.277334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.208 ms 00:21:31.409 [2024-12-09 14:54:09.277339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.409 [2024-12-09 14:54:09.294753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.409 [2024-12-09 
14:54:09.294776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.409 [2024-12-09 14:54:09.294783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms 00:21:31.409 [2024-12-09 14:54:09.294788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.409 [2024-12-09 14:54:09.294824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.409 [2024-12-09 14:54:09.294837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.409 [2024-12-09 14:54:09.294960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.294999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295270] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295420] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.410 [2024-12-09 14:54:09.295449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.410 [2024-12-09 14:54:09.295455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:31.410 [2024-12-09 14:54:09.295462] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:31.410 [2024-12-09 14:54:09.295468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:31.410 [2024-12-09 14:54:09.295474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:31.410 [2024-12-09 14:54:09.295480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:31.410 [2024-12-09 14:54:09.295486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.410 [2024-12-09 14:54:09.295492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.410 [2024-12-09 14:54:09.295498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.410 [2024-12-09 14:54:09.295503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.410 [2024-12-09 14:54:09.295508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.410 [2024-12-09 14:54:09.295513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.410 [2024-12-09 14:54:09.295521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.410 [2024-12-09 14:54:09.295527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:21:31.411 [2024-12-09 14:54:09.295533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.305615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.411 [2024-12-09 14:54:09.305638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.411 [2024-12-09 14:54:09.305646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.069 ms 00:21:31.411 [2024-12-09 14:54:09.305651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.305970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.411 [2024-12-09 14:54:09.305980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.411 [2024-12-09 14:54:09.305987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:31.411 [2024-12-09 14:54:09.305992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.335149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.335174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.411 [2024-12-09 14:54:09.335182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.335189] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.335275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.335283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.411 [2024-12-09 14:54:09.335289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.335295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.335327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.335335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.411 [2024-12-09 14:54:09.335341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.335346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.335359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.335368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.411 [2024-12-09 14:54:09.335374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.335379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.398494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.398526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.411 [2024-12-09 14:54:09.398535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.398541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.411 [2024-12-09 14:54:09.450320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.411 [2024-12-09 14:54:09.450401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.411 [2024-12-09 14:54:09.450449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.411 [2024-12-09 14:54:09.450550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:31.411 [2024-12-09 14:54:09.450556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.411 [2024-12-09 14:54:09.450596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.411 [2024-12-09 14:54:09.450656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.411 [2024-12-09 14:54:09.450709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.411 [2024-12-09 14:54:09.450718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.411 [2024-12-09 14:54:09.450724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.411 [2024-12-09 14:54:09.450878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.884 ms, result 0 00:21:31.982 00:21:31.982 00:21:31.982 14:54:10 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78175 00:21:31.982 14:54:10 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:31.982 14:54:10 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78175 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78175 ']' 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.982 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:32.243 [2024-12-09 14:54:10.136422] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
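The waitforlisten step above blocks until the freshly started spdk_tgt (pid 78175) is accepting connections on /var/tmp/spdk.sock; only then do the rpc.py calls that follow proceed. Conceptually the helper is a bounded connect-retry loop against the RPC UNIX socket — the sketch below illustrates that idea in Python and is not the real autotest_common.sh implementation; the socket path and the retry budget of 100 are taken from the log, the rest is assumption.

    #!/usr/bin/env python3
    # wait_for_rpc.py - illustrative sketch only; the actual logic lives in
    # autotest_common.sh's waitforlisten. Polls the spdk_tgt RPC UNIX socket
    # until a connection succeeds or the retry budget runs out.
    import socket
    import sys
    import time

    def wait_for_listen(sock_path="/var/tmp/spdk.sock", retries=100, delay=0.5):
        for _ in range(retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)   # succeeds once spdk_tgt accepts RPCs
                return True
            except OSError:
                time.sleep(delay)      # socket absent or refusing; target still starting
            finally:
                s.close()
        return False

    if __name__ == "__main__":
        sys.exit(0 if wait_for_listen() else 1)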
00:21:32.243 [2024-12-09 14:54:10.136690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78175 ] 00:21:32.243 [2024-12-09 14:54:10.291922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.504 [2024-12-09 14:54:10.393892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.076 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.076 14:54:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:33.076 14:54:10 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:33.076 [2024-12-09 14:54:11.167612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.076 [2024-12-09 14:54:11.167671] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.337 [2024-12-09 14:54:11.340270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.337 [2024-12-09 14:54:11.340306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:33.337 [2024-12-09 14:54:11.340321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:33.337 [2024-12-09 14:54:11.340328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.337 [2024-12-09 14:54:11.342521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.337 [2024-12-09 14:54:11.342552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.337 [2024-12-09 14:54:11.342561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:21:33.337 [2024-12-09 14:54:11.342567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.337 [2024-12-09 14:54:11.342632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:33.337 [2024-12-09 14:54:11.343254] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:33.337 [2024-12-09 14:54:11.343288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.337 [2024-12-09 14:54:11.343295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.337 [2024-12-09 14:54:11.343303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:21:33.337 [2024-12-09 14:54:11.343309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.337 [2024-12-09 14:54:11.344602] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:33.337 [2024-12-09 14:54:11.354822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.337 [2024-12-09 14:54:11.354855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:33.337 [2024-12-09 14:54:11.354864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.224 ms 00:21:33.337 [2024-12-09 14:54:11.354873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.337 [2024-12-09 14:54:11.354941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.337 [2024-12-09 14:54:11.354951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:33.337 [2024-12-09 14:54:11.354959] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:33.338 [2024-12-09 14:54:11.354966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.361169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.361199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.338 [2024-12-09 14:54:11.361207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:21:33.338 [2024-12-09 14:54:11.361215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.361293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.361303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.338 [2024-12-09 14:54:11.361310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:33.338 [2024-12-09 14:54:11.361320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.361341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.361350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:33.338 [2024-12-09 14:54:11.361356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:33.338 [2024-12-09 14:54:11.361365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.361384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:33.338 [2024-12-09 14:54:11.364445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.364467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.338 [2024-12-09 14:54:11.364477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:21:33.338 [2024-12-09 14:54:11.364483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.364516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.364523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:33.338 [2024-12-09 14:54:11.364531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:33.338 [2024-12-09 14:54:11.364538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.364555] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:33.338 [2024-12-09 14:54:11.364572] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:33.338 [2024-12-09 14:54:11.364608] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:33.338 [2024-12-09 14:54:11.364622] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:33.338 [2024-12-09 14:54:11.364708] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:33.338 [2024-12-09 14:54:11.364717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:33.338 [2024-12-09 14:54:11.364729] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:33.338 [2024-12-09 14:54:11.364737] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:33.338 [2024-12-09 14:54:11.364747] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:33.338 [2024-12-09 14:54:11.364754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:33.338 [2024-12-09 14:54:11.364761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:33.338 [2024-12-09 14:54:11.364767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:33.338 [2024-12-09 14:54:11.364776] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:33.338 [2024-12-09 14:54:11.364782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.364789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:33.338 [2024-12-09 14:54:11.364796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:33.338 [2024-12-09 14:54:11.364818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.364887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.338 [2024-12-09 14:54:11.364896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:33.338 [2024-12-09 14:54:11.364902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:33.338 [2024-12-09 14:54:11.364909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.338 [2024-12-09 14:54:11.364998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:33.338 [2024-12-09 14:54:11.365009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:33.338 [2024-12-09 14:54:11.365016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:33.338 [2024-12-09 14:54:11.365039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:33.338 [2024-12-09 14:54:11.365059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.338 [2024-12-09 14:54:11.365073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:33.338 [2024-12-09 14:54:11.365079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:33.338 [2024-12-09 14:54:11.365084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.338 [2024-12-09 14:54:11.365091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:33.338 [2024-12-09 14:54:11.365098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:33.338 [2024-12-09 14:54:11.365105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 
[2024-12-09 14:54:11.365111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:33.338 [2024-12-09 14:54:11.365117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:33.338 [2024-12-09 14:54:11.365138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:33.338 [2024-12-09 14:54:11.365158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:33.338 [2024-12-09 14:54:11.365175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:33.338 [2024-12-09 14:54:11.365193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:33.338 [2024-12-09 14:54:11.365217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.338 [2024-12-09 14:54:11.365228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:33.338 [2024-12-09 14:54:11.365235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:33.338 [2024-12-09 14:54:11.365240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.338 [2024-12-09 14:54:11.365246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:33.338 [2024-12-09 14:54:11.365251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:33.338 [2024-12-09 14:54:11.365259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:33.338 [2024-12-09 14:54:11.365270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:33.338 [2024-12-09 14:54:11.365275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365282] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:33.338 [2024-12-09 14:54:11.365289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:33.338 [2024-12-09 14:54:11.365296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.338 [2024-12-09 14:54:11.365311] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:33.338 [2024-12-09 14:54:11.365317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:33.338 [2024-12-09 14:54:11.365324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:33.338 [2024-12-09 14:54:11.365329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:33.338 [2024-12-09 14:54:11.365335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:33.338 [2024-12-09 14:54:11.365341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:33.338 [2024-12-09 14:54:11.365350] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:33.338 [2024-12-09 14:54:11.365356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.338 [2024-12-09 14:54:11.365367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:33.338 [2024-12-09 14:54:11.365373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:33.338 [2024-12-09 14:54:11.365380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:33.338 [2024-12-09 14:54:11.365385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:33.338 [2024-12-09 14:54:11.365392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:33.338 [2024-12-09 14:54:11.365397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:33.338 [2024-12-09 14:54:11.365404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:33.339 [2024-12-09 14:54:11.365409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:33.339 [2024-12-09 14:54:11.365416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:33.339 [2024-12-09 14:54:11.365422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:33.339 [2024-12-09 14:54:11.365452] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:33.339 [2024-12-09 
14:54:11.365459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:33.339 [2024-12-09 14:54:11.365475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:33.339 [2024-12-09 14:54:11.365483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:33.339 [2024-12-09 14:54:11.365488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:33.339 [2024-12-09 14:54:11.365496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.365502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:33.339 [2024-12-09 14:54:11.365510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:33.339 [2024-12-09 14:54:11.365519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.389624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.389652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.339 [2024-12-09 14:54:11.389663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.048 ms 00:21:33.339 [2024-12-09 14:54:11.389671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.389767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.389774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:33.339 [2024-12-09 14:54:11.389783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:33.339 [2024-12-09 14:54:11.389789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.416266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.416294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.339 [2024-12-09 14:54:11.416304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.446 ms 00:21:33.339 [2024-12-09 14:54:11.416310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.416356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.416364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.339 [2024-12-09 14:54:11.416372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:33.339 [2024-12-09 14:54:11.416377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.416760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.416772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.339 [2024-12-09 14:54:11.416783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:21:33.339 [2024-12-09 14:54:11.416789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.416916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.416925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.339 [2024-12-09 14:54:11.416953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:33.339 [2024-12-09 14:54:11.416961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.430260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.430284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.339 [2024-12-09 14:54:11.430294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.281 ms 00:21:33.339 [2024-12-09 14:54:11.430300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.339 [2024-12-09 14:54:11.453386] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:33.339 [2024-12-09 14:54:11.453435] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:33.339 [2024-12-09 14:54:11.453457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.339 [2024-12-09 14:54:11.453470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:33.339 [2024-12-09 14:54:11.453485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.077 ms 00:21:33.339 [2024-12-09 14:54:11.453502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.473216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.473356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:33.601 [2024-12-09 14:54:11.473373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms 00:21:33.601 [2024-12-09 14:54:11.473380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.482645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.482670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:33.601 [2024-12-09 14:54:11.482681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.205 ms 00:21:33.601 [2024-12-09 14:54:11.482686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.491629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.491654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:33.601 [2024-12-09 14:54:11.491663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.900 ms 00:21:33.601 [2024-12-09 14:54:11.491669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.492177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.492204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:33.601 [2024-12-09 14:54:11.492213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:21:33.601 [2024-12-09 14:54:11.492219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 
14:54:11.552020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.552075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:33.601 [2024-12-09 14:54:11.552091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.778 ms 00:21:33.601 [2024-12-09 14:54:11.552100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.562283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:33.601 [2024-12-09 14:54:11.575810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.575848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:33.601 [2024-12-09 14:54:11.575862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.606 ms 00:21:33.601 [2024-12-09 14:54:11.575872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.575943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.575955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:33.601 [2024-12-09 14:54:11.575963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:33.601 [2024-12-09 14:54:11.575973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.576020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.576031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:33.601 [2024-12-09 14:54:11.576039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:33.601 [2024-12-09 14:54:11.576050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.576073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.576082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:33.601 [2024-12-09 14:54:11.576090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.601 [2024-12-09 14:54:11.576101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.576133] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:33.601 [2024-12-09 14:54:11.576145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.576154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:33.601 [2024-12-09 14:54:11.576163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:33.601 [2024-12-09 14:54:11.576171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.599717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.599858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:33.601 [2024-12-09 14:54:11.599879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.519 ms 00:21:33.601 [2024-12-09 14:54:11.599888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.599971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.601 [2024-12-09 14:54:11.599982] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:33.601 [2024-12-09 14:54:11.599991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:33.601 [2024-12-09 14:54:11.600001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.601 [2024-12-09 14:54:11.600778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:33.601 [2024-12-09 14:54:11.603751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.227 ms, result 0 00:21:33.601 [2024-12-09 14:54:11.605855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:33.601 Some configs were skipped because the RPC state that can call them passed over. 00:21:33.601 14:54:11 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:33.863 [2024-12-09 14:54:11.845565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.863 [2024-12-09 14:54:11.845714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:33.863 [2024-12-09 14:54:11.845772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:21:33.863 [2024-12-09 14:54:11.845798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.863 [2024-12-09 14:54:11.845860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.991 ms, result 0 00:21:33.863 true 00:21:33.863 14:54:11 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:34.123 [2024-12-09 14:54:12.057932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.123 [2024-12-09 14:54:12.058101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:34.123 [2024-12-09 14:54:12.058128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:21:34.123 [2024-12-09 14:54:12.058138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.123 [2024-12-09 14:54:12.058185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.991 ms, result 0 00:21:34.123 true 00:21:34.123 14:54:12 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78175 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78175 ']' 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78175 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78175 00:21:34.123 killing process with pid 78175 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78175' 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78175 00:21:34.123 14:54:12 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78175 00:21:34.691 [2024-12-09 14:54:12.672081] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.672128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.691 [2024-12-09 14:54:12.672138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.691 [2024-12-09 14:54:12.672146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.672164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:34.691 [2024-12-09 14:54:12.674248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.674271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.691 [2024-12-09 14:54:12.674282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:21:34.691 [2024-12-09 14:54:12.674288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.674525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.674533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.691 [2024-12-09 14:54:12.674541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:21:34.691 [2024-12-09 14:54:12.674547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.677544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.677568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.691 [2024-12-09 14:54:12.677579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:21:34.691 [2024-12-09 14:54:12.677584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.682756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.682778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:34.691 [2024-12-09 14:54:12.682789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.143 ms 00:21:34.691 [2024-12-09 14:54:12.682796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.690084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.690213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.691 [2024-12-09 14:54:12.690229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms 00:21:34.691 [2024-12-09 14:54:12.690235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.697137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.697235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.691 [2024-12-09 14:54:12.697249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:21:34.691 [2024-12-09 14:54:12.697255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.697360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.697368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.691 [2024-12-09 14:54:12.697376] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:34.691 [2024-12-09 14:54:12.697382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.705068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.705092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:34.691 [2024-12-09 14:54:12.705100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.670 ms 00:21:34.691 [2024-12-09 14:54:12.705105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.712770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.712878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:34.691 [2024-12-09 14:54:12.712896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.636 ms 00:21:34.691 [2024-12-09 14:54:12.712901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.719911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.719998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.691 [2024-12-09 14:54:12.720047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:21:34.691 [2024-12-09 14:54:12.720064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.727026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.691 [2024-12-09 14:54:12.727111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.691 [2024-12-09 14:54:12.727159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:21:34.691 [2024-12-09 14:54:12.727176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.691 [2024-12-09 14:54:12.727217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:34.691 [2024-12-09 14:54:12.727270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:34.691 [2024-12-09 14:54:12.727616] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free [Bands 12 through 100: 0 / 261120 wr_cnt: 0 state: free (89 identical per-band entries collapsed)] 00:21:34.692 [2024-12-09 14:54:12.730620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:34.692 [2024-12-09 14:54:12.730685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:34.692 [2024-12-09 14:54:12.730739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:34.692 [2024-12-09 14:54:12.730754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:34.692 [2024-12-09 14:54:12.730769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:34.692 [2024-12-09 14:54:12.730785] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:34.692 [2024-12-09 14:54:12.730807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:34.692 [2024-12-09 14:54:12.730824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:34.692 [2024-12-09 14:54:12.730840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:34.692 [2024-12-09 14:54:12.730864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:34.692 [2024-12-09 14:54:12.730878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:34.692 [2024-12-09 14:54:12.730928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:34.692 [2024-12-09 14:54:12.730945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:34.692 [2024-12-09 14:54:12.730962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:21:34.692 [2024-12-09 14:54:12.730977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.692 [2024-12-09 14:54:12.740460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.692 [2024-12-09 14:54:12.740537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:34.692 [2024-12-09 14:54:12.740581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.453 ms 00:21:34.692 [2024-12-09 14:54:12.740617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.692 [2024-12-09 14:54:12.740931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.693 [2024-12-09 14:54:12.740984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:34.693 [2024-12-09 14:54:12.741032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:34.693 [2024-12-09 14:54:12.741050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.693 [2024-12-09 14:54:12.776230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.693 [2024-12-09 14:54:12.776318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.693 [2024-12-09 14:54:12.776359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.693 [2024-12-09 14:54:12.776378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.693 [2024-12-09 14:54:12.776464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.693 [2024-12-09 14:54:12.776626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.693 [2024-12-09 14:54:12.776678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.693 [2024-12-09 14:54:12.776696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.693 [2024-12-09 14:54:12.776742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.693 [2024-12-09 14:54:12.776760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.693 [2024-12-09 14:54:12.776778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.693 [2024-12-09 14:54:12.776792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.693 [2024-12-09 14:54:12.776915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.693 [2024-12-09 14:54:12.776938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.693 [2024-12-09 14:54:12.776957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.693 [2024-12-09 14:54:12.776973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.836617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.836729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.952 [2024-12-09 14:54:12.836769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.836787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 
14:54:12.884822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.884932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.952 [2024-12-09 14:54:12.884972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.884991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.952 [2024-12-09 14:54:12.885101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.952 [2024-12-09 14:54:12.885226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.952 [2024-12-09 14:54:12.885358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:34.952 [2024-12-09 14:54:12.885497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.952 [2024-12-09 14:54:12.885551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.952 [2024-12-09 14:54:12.885599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.952 [2024-12-09 14:54:12.885607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.952 [2024-12-09 14:54:12.885612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.952 [2024-12-09 14:54:12.885719] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.618 ms, result 0 00:21:35.519 14:54:13 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:35.519 14:54:13 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:35.519 [2024-12-09 14:54:13.468895] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:21:35.519 [2024-12-09 14:54:13.469021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78228 ] 00:21:35.519 [2024-12-09 14:54:13.625048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.777 [2024-12-09 14:54:13.700404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.037 [2024-12-09 14:54:13.908475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.037 [2024-12-09 14:54:13.908527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:36.037 [2024-12-09 14:54:14.056142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.056178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:36.037 [2024-12-09 14:54:14.056189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:36.037 [2024-12-09 14:54:14.056195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.058246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.058369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:36.037 [2024-12-09 14:54:14.058382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:21:36.037 [2024-12-09 14:54:14.058388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.058443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:36.037 [2024-12-09 14:54:14.058997] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:36.037 [2024-12-09 14:54:14.059015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.059021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:36.037 [2024-12-09 14:54:14.059028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:21:36.037 [2024-12-09 14:54:14.059034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.060154] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:36.037 [2024-12-09 14:54:14.069584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.069695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:36.037 [2024-12-09 14:54:14.069708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.432 ms 00:21:36.037 [2024-12-09 14:54:14.069713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.069771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.069780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:36.037 [2024-12-09 14:54:14.069786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:21:36.037 [2024-12-09 14:54:14.069792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.074019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.074041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:36.037 [2024-12-09 14:54:14.074048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.174 ms 00:21:36.037 [2024-12-09 14:54:14.074054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.074125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.074132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:36.037 [2024-12-09 14:54:14.074138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:36.037 [2024-12-09 14:54:14.074144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.074161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.074168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:36.037 [2024-12-09 14:54:14.074173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:36.037 [2024-12-09 14:54:14.074179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.074196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:36.037 [2024-12-09 14:54:14.076724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.076836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:36.037 [2024-12-09 14:54:14.076848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:21:36.037 [2024-12-09 14:54:14.076855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.076887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.037 [2024-12-09 14:54:14.076894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:36.037 [2024-12-09 14:54:14.076900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:36.037 [2024-12-09 14:54:14.076905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.037 [2024-12-09 14:54:14.076920] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:36.037 [2024-12-09 14:54:14.076934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:36.037 [2024-12-09 14:54:14.076960] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:36.037 [2024-12-09 14:54:14.076971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:36.037 [2024-12-09 14:54:14.077050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:36.038 [2024-12-09 14:54:14.077058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:36.038 [2024-12-09 14:54:14.077066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:36.038 [2024-12-09 14:54:14.077075] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077082] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077088] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:36.038 [2024-12-09 14:54:14.077094] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:36.038 [2024-12-09 14:54:14.077099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:36.038 [2024-12-09 14:54:14.077104] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:36.038 [2024-12-09 14:54:14.077110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.038 [2024-12-09 14:54:14.077115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:36.038 [2024-12-09 14:54:14.077121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:36.038 [2024-12-09 14:54:14.077126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.038 [2024-12-09 14:54:14.077192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.038 [2024-12-09 14:54:14.077200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:36.038 [2024-12-09 14:54:14.077206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:36.038 [2024-12-09 14:54:14.077211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.038 [2024-12-09 14:54:14.077285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:36.038 [2024-12-09 14:54:14.077292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:36.038 [2024-12-09 14:54:14.077297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:36.038 [2024-12-09 14:54:14.077314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:36.038 [2024-12-09 14:54:14.077330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:36.038 [2024-12-09 14:54:14.077340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:36.038 [2024-12-09 14:54:14.077349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:36.038 [2024-12-09 14:54:14.077355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:36.038 [2024-12-09 14:54:14.077360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:36.038 [2024-12-09 14:54:14.077365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:36.038 [2024-12-09 14:54:14.077370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077375] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:36.038 [2024-12-09 14:54:14.077380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:36.038 [2024-12-09 14:54:14.077396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:36.038 [2024-12-09 14:54:14.077411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:36.038 [2024-12-09 14:54:14.077426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:36.038 [2024-12-09 14:54:14.077440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:36.038 [2024-12-09 14:54:14.077455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:36.038 [2024-12-09 14:54:14.077465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:36.038 [2024-12-09 14:54:14.077471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:36.038 [2024-12-09 14:54:14.077475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:36.038 [2024-12-09 14:54:14.077480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:36.038 [2024-12-09 14:54:14.077486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:36.038 [2024-12-09 14:54:14.077491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:36.038 [2024-12-09 14:54:14.077501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:36.038 [2024-12-09 14:54:14.077505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077511] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:36.038 [2024-12-09 14:54:14.077517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:36.038 [2024-12-09 14:54:14.077524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:36.038 [2024-12-09 14:54:14.077535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:36.038 
[2024-12-09 14:54:14.077539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:36.038 [2024-12-09 14:54:14.077546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:36.038 [2024-12-09 14:54:14.077551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:36.038 [2024-12-09 14:54:14.077556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:36.038 [2024-12-09 14:54:14.077561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:36.038 [2024-12-09 14:54:14.077567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:36.038 [2024-12-09 14:54:14.077574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:36.038 [2024-12-09 14:54:14.077585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:36.038 [2024-12-09 14:54:14.077591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:36.038 [2024-12-09 14:54:14.077597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:36.038 [2024-12-09 14:54:14.077602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:36.038 [2024-12-09 14:54:14.077607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:36.038 [2024-12-09 14:54:14.077613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:36.038 [2024-12-09 14:54:14.077618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:36.038 [2024-12-09 14:54:14.077624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:36.038 [2024-12-09 14:54:14.077629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:36.038 [2024-12-09 14:54:14.077656] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:36.038 [2024-12-09 14:54:14.077662] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:36.038 [2024-12-09 14:54:14.077674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:36.038 [2024-12-09 14:54:14.077679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:36.038 [2024-12-09 14:54:14.077685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:36.038 [2024-12-09 14:54:14.077690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.038 [2024-12-09 14:54:14.077698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:36.038 [2024-12-09 14:54:14.077703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:21:36.038 [2024-12-09 14:54:14.077709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.038 [2024-12-09 14:54:14.098448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.038 [2024-12-09 14:54:14.098475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:36.038 [2024-12-09 14:54:14.098484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.698 ms 00:21:36.038 [2024-12-09 14:54:14.098489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.038 [2024-12-09 14:54:14.098582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.038 [2024-12-09 14:54:14.098590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:36.039 [2024-12-09 14:54:14.098596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:36.039 [2024-12-09 14:54:14.098602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.039 [2024-12-09 14:54:14.137284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.039 [2024-12-09 14:54:14.137391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:36.039 [2024-12-09 14:54:14.137408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.665 ms 00:21:36.039 [2024-12-09 14:54:14.137415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.039 [2024-12-09 14:54:14.137473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.039 [2024-12-09 14:54:14.137481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:36.039 [2024-12-09 14:54:14.137488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:36.039 [2024-12-09 14:54:14.137494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.039 [2024-12-09 14:54:14.137786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.039 [2024-12-09 14:54:14.137798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:36.039 [2024-12-09 14:54:14.137823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:36.039 [2024-12-09 14:54:14.137833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.039 [2024-12-09 
14:54:14.137936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.039 [2024-12-09 14:54:14.137944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:36.039 [2024-12-09 14:54:14.137950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:36.039 [2024-12-09 14:54:14.137955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.039 [2024-12-09 14:54:14.148622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.039 [2024-12-09 14:54:14.148717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.039 [2024-12-09 14:54:14.148729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.651 ms 00:21:36.039 [2024-12-09 14:54:14.148735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.158396] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:36.299 [2024-12-09 14:54:14.158423] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:36.299 [2024-12-09 14:54:14.158432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.158439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:36.299 [2024-12-09 14:54:14.158445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.597 ms 00:21:36.299 [2024-12-09 14:54:14.158451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.176703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.176730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:36.299 [2024-12-09 14:54:14.176739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.207 ms 00:21:36.299 [2024-12-09 14:54:14.176745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.185386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.185411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:36.299 [2024-12-09 14:54:14.185419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.589 ms 00:21:36.299 [2024-12-09 14:54:14.185425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.194017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.194041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:36.299 [2024-12-09 14:54:14.194050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.554 ms 00:21:36.299 [2024-12-09 14:54:14.194056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.194515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.194535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:36.299 [2024-12-09 14:54:14.194542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:21:36.299 [2024-12-09 14:54:14.194548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.237788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.237828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:36.299 [2024-12-09 14:54:14.237838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.224 ms 00:21:36.299 [2024-12-09 14:54:14.237845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.247583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:36.299 [2024-12-09 14:54:14.258860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.258888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:36.299 [2024-12-09 14:54:14.258897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.999 ms 00:21:36.299 [2024-12-09 14:54:14.258907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.258978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.258987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:36.299 [2024-12-09 14:54:14.258993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:36.299 [2024-12-09 14:54:14.258999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.259033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.259040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:36.299 [2024-12-09 14:54:14.259047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:36.299 [2024-12-09 14:54:14.259054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.259079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.259086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:36.299 [2024-12-09 14:54:14.259092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:36.299 [2024-12-09 14:54:14.259098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.259121] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:36.299 [2024-12-09 14:54:14.259129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.299 [2024-12-09 14:54:14.259135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:36.299 [2024-12-09 14:54:14.259141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:36.299 [2024-12-09 14:54:14.259146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.299 [2024-12-09 14:54:14.277233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.300 [2024-12-09 14:54:14.277259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:36.300 [2024-12-09 14:54:14.277268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.071 ms 00:21:36.300 [2024-12-09 14:54:14.277274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.300 [2024-12-09 14:54:14.277340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.300 [2024-12-09 14:54:14.277348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:36.300 [2024-12-09 14:54:14.277355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:36.300 [2024-12-09 14:54:14.277360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.300 [2024-12-09 14:54:14.277996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:36.300 [2024-12-09 14:54:14.280298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.609 ms, result 0 00:21:36.300 [2024-12-09 14:54:14.280943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:36.300 [2024-12-09 14:54:14.295540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.243  [2024-12-09T14:54:16.304Z] Copying: 17/256 [MB] (17 MBps) [2024-12-09T14:54:17.688Z] Copying: 41/256 [MB] (23 MBps) [2024-12-09T14:54:18.629Z] Copying: 56/256 [MB] (15 MBps) [2024-12-09T14:54:19.570Z] Copying: 72/256 [MB] (15 MBps) [2024-12-09T14:54:20.510Z] Copying: 92/256 [MB] (20 MBps) [2024-12-09T14:54:21.449Z] Copying: 114/256 [MB] (21 MBps) [2024-12-09T14:54:22.391Z] Copying: 133/256 [MB] (19 MBps) [2024-12-09T14:54:23.332Z] Copying: 156/256 [MB] (22 MBps) [2024-12-09T14:54:24.719Z] Copying: 173/256 [MB] (16 MBps) [2024-12-09T14:54:25.301Z] Copying: 192/256 [MB] (18 MBps) [2024-12-09T14:54:26.343Z] Copying: 210/256 [MB] (18 MBps) [2024-12-09T14:54:27.727Z] Copying: 223/256 [MB] (13 MBps) [2024-12-09T14:54:27.727Z] Copying: 250/256 [MB] (26 MBps) [2024-12-09T14:54:27.727Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-09 14:54:27.580068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:49.605 [2024-12-09 14:54:27.590377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.605 [2024-12-09 14:54:27.590425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:49.605 [2024-12-09 14:54:27.590449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:49.606 [2024-12-09 14:54:27.590459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.590483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:49.606 [2024-12-09 14:54:27.593432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.593469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:49.606 [2024-12-09 14:54:27.593481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:21:49.606 [2024-12-09 14:54:27.593491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.593756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.593767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:49.606 [2024-12-09 14:54:27.593777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:21:49.606 [2024-12-09 14:54:27.593785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.597881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.597902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:21:49.606 [2024-12-09 14:54:27.597911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.064 ms 00:21:49.606 [2024-12-09 14:54:27.597919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.604932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.604970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:49.606 [2024-12-09 14:54:27.604980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.994 ms 00:21:49.606 [2024-12-09 14:54:27.604989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.630628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.630675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:49.606 [2024-12-09 14:54:27.630688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.571 ms 00:21:49.606 [2024-12-09 14:54:27.630696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.647497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.647546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:49.606 [2024-12-09 14:54:27.647564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.752 ms 00:21:49.606 [2024-12-09 14:54:27.647572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.647726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.647738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:49.606 [2024-12-09 14:54:27.647757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:49.606 [2024-12-09 14:54:27.647765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.673312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.673361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:49.606 [2024-12-09 14:54:27.673372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.530 ms 00:21:49.606 [2024-12-09 14:54:27.673379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.698684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.698733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:49.606 [2024-12-09 14:54:27.698744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.259 ms 00:21:49.606 [2024-12-09 14:54:27.698751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.606 [2024-12-09 14:54:27.723455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.606 [2024-12-09 14:54:27.723501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:49.606 [2024-12-09 14:54:27.723512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.658 ms 00:21:49.606 [2024-12-09 14:54:27.723519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.867 [2024-12-09 14:54:27.748422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.867 [2024-12-09 14:54:27.748469] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.867 [2024-12-09 14:54:27.748480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.829 ms 00:21:49.867 [2024-12-09 14:54:27.748487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.867 [2024-12-09 14:54:27.748530] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.867 [2024-12-09 14:54:27.748547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:49.867 [2024-12-09 14:54:27.748724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.867 [2024-12-09 14:54:27.748857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.748996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749321] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.868 [2024-12-09 14:54:27.749360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.868 [2024-12-09 14:54:27.749368] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:49.868 [2024-12-09 14:54:27.749377] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:49.868 [2024-12-09 14:54:27.749385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:49.868 [2024-12-09 14:54:27.749392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:49.868 [2024-12-09 14:54:27.749400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:49.868 [2024-12-09 14:54:27.749407] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.868 [2024-12-09 14:54:27.749415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.868 [2024-12-09 14:54:27.749425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.868 [2024-12-09 14:54:27.749432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.868 [2024-12-09 14:54:27.749443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.868 [2024-12-09 14:54:27.749451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.868 [2024-12-09 14:54:27.749458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.868 [2024-12-09 14:54:27.749467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:21:49.868 [2024-12-09 14:54:27.749474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.762706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.868 [2024-12-09 14:54:27.762744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.868 [2024-12-09 14:54:27.762755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.200 ms 00:21:49.868 [2024-12-09 14:54:27.762763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.763195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.868 [2024-12-09 14:54:27.763210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:49.868 [2024-12-09 14:54:27.763219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:21:49.868 [2024-12-09 14:54:27.763228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.801965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.868 [2024-12-09 14:54:27.802016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.868 [2024-12-09 14:54:27.802026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.868 [2024-12-09 14:54:27.802041] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.802138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.868 [2024-12-09 14:54:27.802149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.868 [2024-12-09 14:54:27.802158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.868 [2024-12-09 14:54:27.802166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.802219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.868 [2024-12-09 14:54:27.802229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.868 [2024-12-09 14:54:27.802237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.868 [2024-12-09 14:54:27.802245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.802265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.868 [2024-12-09 14:54:27.802273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.868 [2024-12-09 14:54:27.802281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.868 [2024-12-09 14:54:27.802288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.868 [2024-12-09 14:54:27.887032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.887093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.869 [2024-12-09 14:54:27.887107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.887116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.956535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.956596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.869 [2024-12-09 14:54:27.956609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.956618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.956696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.956706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.869 [2024-12-09 14:54:27.956715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.956724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.956758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.956773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.869 [2024-12-09 14:54:27.956781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.956790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.956911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.956924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.869 [2024-12-09 14:54:27.956933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:49.869 [2024-12-09 14:54:27.956941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.956975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.956985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:49.869 [2024-12-09 14:54:27.956997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.957006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.957050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.957060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.869 [2024-12-09 14:54:27.957068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.957076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.957124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.869 [2024-12-09 14:54:27.957137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.869 [2024-12-09 14:54:27.957146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.869 [2024-12-09 14:54:27.957155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.869 [2024-12-09 14:54:27.957308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.919 ms, result 0 00:21:50.810 00:21:50.810 00:21:50.810 14:54:28 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:50.810 14:54:28 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:51.384 14:54:29 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.384 [2024-12-09 14:54:29.380944] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:21:51.385 [2024-12-09 14:54:29.381096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78395 ] 00:21:51.645 [2024-12-09 14:54:29.543178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.645 [2024-12-09 14:54:29.666081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.906 [2024-12-09 14:54:29.959482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:51.906 [2024-12-09 14:54:29.959567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.168 [2024-12-09 14:54:30.125296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.168 [2024-12-09 14:54:30.125359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:52.168 [2024-12-09 14:54:30.125374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:52.168 [2024-12-09 14:54:30.125383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.168 [2024-12-09 14:54:30.128352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.168 [2024-12-09 14:54:30.128405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.168 [2024-12-09 14:54:30.128416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:21:52.168 [2024-12-09 14:54:30.128425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.168 [2024-12-09 14:54:30.128547] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:52.168 [2024-12-09 14:54:30.129284] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:52.168 [2024-12-09 14:54:30.129326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.168 [2024-12-09 14:54:30.129335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.168 [2024-12-09 14:54:30.129345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:21:52.168 [2024-12-09 14:54:30.129353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.168 [2024-12-09 14:54:30.131061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:52.168 [2024-12-09 14:54:30.145625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.168 [2024-12-09 14:54:30.145674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:52.168 [2024-12-09 14:54:30.145687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.566 ms 00:21:52.168 [2024-12-09 14:54:30.145696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.168 [2024-12-09 14:54:30.145822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.168 [2024-12-09 14:54:30.145837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:52.168 [2024-12-09 14:54:30.145847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:52.168 [2024-12-09 14:54:30.145856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.168 [2024-12-09 14:54:30.153619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:52.169 [2024-12-09 14:54:30.153659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.169 [2024-12-09 14:54:30.153669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.718 ms 00:21:52.169 [2024-12-09 14:54:30.153677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.153780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.153791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.169 [2024-12-09 14:54:30.153818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:52.169 [2024-12-09 14:54:30.153828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.153861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.153871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:52.169 [2024-12-09 14:54:30.153880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:52.169 [2024-12-09 14:54:30.153888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.153909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:52.169 [2024-12-09 14:54:30.158105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.158143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.169 [2024-12-09 14:54:30.158154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:21:52.169 [2024-12-09 14:54:30.158163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.158242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.158253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:52.169 [2024-12-09 14:54:30.158263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:52.169 [2024-12-09 14:54:30.158270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.158296] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:52.169 [2024-12-09 14:54:30.158318] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:52.169 [2024-12-09 14:54:30.158356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:52.169 [2024-12-09 14:54:30.158371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:52.169 [2024-12-09 14:54:30.158477] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:52.169 [2024-12-09 14:54:30.158489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:52.169 [2024-12-09 14:54:30.158500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:52.169 [2024-12-09 14:54:30.158513] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:52.169 [2024-12-09 14:54:30.158523] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:52.169 [2024-12-09 14:54:30.158532] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:52.169 [2024-12-09 14:54:30.158541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:52.169 [2024-12-09 14:54:30.158549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:52.169 [2024-12-09 14:54:30.158557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:52.169 [2024-12-09 14:54:30.158565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.158573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:52.169 [2024-12-09 14:54:30.158584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:52.169 [2024-12-09 14:54:30.158591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.158679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.169 [2024-12-09 14:54:30.158707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:52.169 [2024-12-09 14:54:30.158715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:52.169 [2024-12-09 14:54:30.158723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.169 [2024-12-09 14:54:30.158842] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:52.169 [2024-12-09 14:54:30.158877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:52.169 [2024-12-09 14:54:30.158887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.169 [2024-12-09 14:54:30.158895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.158903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:52.169 [2024-12-09 14:54:30.158911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.158919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:52.169 [2024-12-09 14:54:30.158925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:52.169 [2024-12-09 14:54:30.158932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:52.169 [2024-12-09 14:54:30.158939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.169 [2024-12-09 14:54:30.158947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:52.169 [2024-12-09 14:54:30.158960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:52.169 [2024-12-09 14:54:30.158968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.169 [2024-12-09 14:54:30.158975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:52.169 [2024-12-09 14:54:30.158983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:52.169 [2024-12-09 14:54:30.158993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:52.169 [2024-12-09 14:54:30.159007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:52.169 [2024-12-09 14:54:30.159029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:52.169 [2024-12-09 14:54:30.159052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:52.169 [2024-12-09 14:54:30.159072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:52.169 [2024-12-09 14:54:30.159094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:52.169 [2024-12-09 14:54:30.159114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.169 [2024-12-09 14:54:30.159127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:52.169 [2024-12-09 14:54:30.159134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:52.169 [2024-12-09 14:54:30.159141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.169 [2024-12-09 14:54:30.159148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:52.169 [2024-12-09 14:54:30.159155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:52.169 [2024-12-09 14:54:30.159161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:52.169 [2024-12-09 14:54:30.159173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:52.169 [2024-12-09 14:54:30.159180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159187] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:52.169 [2024-12-09 14:54:30.159196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:52.169 [2024-12-09 14:54:30.159206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.169 [2024-12-09 14:54:30.159223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:52.169 [2024-12-09 14:54:30.159230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:52.169 [2024-12-09 14:54:30.159237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:52.169 
[2024-12-09 14:54:30.159244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:52.169 [2024-12-09 14:54:30.159252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:52.169 [2024-12-09 14:54:30.159259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:52.169 [2024-12-09 14:54:30.159268] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:52.169 [2024-12-09 14:54:30.159277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.169 [2024-12-09 14:54:30.159286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:52.169 [2024-12-09 14:54:30.159293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:52.169 [2024-12-09 14:54:30.159301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:52.169 [2024-12-09 14:54:30.159308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:52.169 [2024-12-09 14:54:30.159315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:52.169 [2024-12-09 14:54:30.159321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:52.169 [2024-12-09 14:54:30.159329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:52.170 [2024-12-09 14:54:30.159336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:52.170 [2024-12-09 14:54:30.159343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:52.170 [2024-12-09 14:54:30.159350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:52.170 [2024-12-09 14:54:30.159387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:52.170 [2024-12-09 14:54:30.159395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:52.170 [2024-12-09 14:54:30.159410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:52.170 [2024-12-09 14:54:30.159419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:52.170 [2024-12-09 14:54:30.159425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:52.170 [2024-12-09 14:54:30.159434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.159448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:52.170 [2024-12-09 14:54:30.159456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:21:52.170 [2024-12-09 14:54:30.159464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.191221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.191272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.170 [2024-12-09 14:54:30.191284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.696 ms 00:21:52.170 [2024-12-09 14:54:30.191292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.191435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.191447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.170 [2024-12-09 14:54:30.191455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:52.170 [2024-12-09 14:54:30.191463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.242459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.242514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.170 [2024-12-09 14:54:30.242530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.972 ms 00:21:52.170 [2024-12-09 14:54:30.242539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.242645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.242659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.170 [2024-12-09 14:54:30.242669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:52.170 [2024-12-09 14:54:30.242677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.243291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.243330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.170 [2024-12-09 14:54:30.243350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:21:52.170 [2024-12-09 14:54:30.243358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.243512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.243523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.170 [2024-12-09 14:54:30.243532] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:52.170 [2024-12-09 14:54:30.243540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.259564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.259610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.170 [2024-12-09 14:54:30.259622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.001 ms 00:21:52.170 [2024-12-09 14:54:30.259630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.170 [2024-12-09 14:54:30.273713] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:52.170 [2024-12-09 14:54:30.273761] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:52.170 [2024-12-09 14:54:30.273776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.170 [2024-12-09 14:54:30.273785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:52.170 [2024-12-09 14:54:30.273795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.038 ms 00:21:52.170 [2024-12-09 14:54:30.273812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.299598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.299648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:52.432 [2024-12-09 14:54:30.299660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.696 ms 00:21:52.432 [2024-12-09 14:54:30.299668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.312487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.312530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:52.432 [2024-12-09 14:54:30.312541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.729 ms 00:21:52.432 [2024-12-09 14:54:30.312549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.324833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.324877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:52.432 [2024-12-09 14:54:30.324889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:21:52.432 [2024-12-09 14:54:30.324897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.325545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.325578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:52.432 [2024-12-09 14:54:30.325589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:52.432 [2024-12-09 14:54:30.325597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.390270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.390326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:52.432 [2024-12-09 14:54:30.390342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.645 ms 00:21:52.432 [2024-12-09 14:54:30.390351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.401501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:52.432 [2024-12-09 14:54:30.420110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.420164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:52.432 [2024-12-09 14:54:30.420178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.663 ms 00:21:52.432 [2024-12-09 14:54:30.420192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.420279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.420291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:52.432 [2024-12-09 14:54:30.420301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:52.432 [2024-12-09 14:54:30.420311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.420367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.420378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:52.432 [2024-12-09 14:54:30.420387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:52.432 [2024-12-09 14:54:30.420398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.420429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.420438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:52.432 [2024-12-09 14:54:30.420447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:52.432 [2024-12-09 14:54:30.420455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.420492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:52.432 [2024-12-09 14:54:30.420502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.420511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:52.432 [2024-12-09 14:54:30.420519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:52.432 [2024-12-09 14:54:30.420527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.446451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.446502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:52.432 [2024-12-09 14:54:30.446516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.901 ms 00:21:52.432 [2024-12-09 14:54:30.446525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.432 [2024-12-09 14:54:30.446635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.432 [2024-12-09 14:54:30.446647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:52.432 [2024-12-09 14:54:30.446657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:52.432 [2024-12-09 14:54:30.446666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:52.432 [2024-12-09 14:54:30.447834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:52.432 [2024-12-09 14:54:30.451087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.190 ms, result 0 00:21:52.432 [2024-12-09 14:54:30.452338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.432 [2024-12-09 14:54:30.465690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:52.694  [2024-12-09T14:54:30.816Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-12-09 14:54:30.773116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.694 [2024-12-09 14:54:30.782219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.694 [2024-12-09 14:54:30.782262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:52.694 [2024-12-09 14:54:30.782281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:52.694 [2024-12-09 14:54:30.782290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.694 [2024-12-09 14:54:30.782313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:52.694 [2024-12-09 14:54:30.785231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.694 [2024-12-09 14:54:30.785267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:52.694 [2024-12-09 14:54:30.785278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:21:52.694 [2024-12-09 14:54:30.785287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.694 [2024-12-09 14:54:30.788261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.694 [2024-12-09 14:54:30.788304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:52.694 [2024-12-09 14:54:30.788315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:21:52.694 [2024-12-09 14:54:30.788323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.694 [2024-12-09 14:54:30.792770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.694 [2024-12-09 14:54:30.792818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:52.694 [2024-12-09 14:54:30.792829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:21:52.694 [2024-12-09 14:54:30.792837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.694 [2024-12-09 14:54:30.799797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.694 [2024-12-09 14:54:30.799842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.694 [2024-12-09 14:54:30.799853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.928 ms 00:21:52.694 [2024-12-09 14:54:30.799861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.825101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.825147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.957 [2024-12-09 14:54:30.825159] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.192 ms 00:21:52.957 [2024-12-09 14:54:30.825167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.841591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.841643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.957 [2024-12-09 14:54:30.841655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.378 ms 00:21:52.957 [2024-12-09 14:54:30.841663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.841832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.841845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.957 [2024-12-09 14:54:30.841864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:21:52.957 [2024-12-09 14:54:30.841872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.867321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.867363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:52.957 [2024-12-09 14:54:30.867375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.431 ms 00:21:52.957 [2024-12-09 14:54:30.867382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.891858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.891901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:52.957 [2024-12-09 14:54:30.891912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.430 ms 00:21:52.957 [2024-12-09 14:54:30.891918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.915845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.915887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.957 [2024-12-09 14:54:30.915898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.880 ms 00:21:52.957 [2024-12-09 14:54:30.915905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.940171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.957 [2024-12-09 14:54:30.940209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.957 [2024-12-09 14:54:30.940220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.194 ms 00:21:52.957 [2024-12-09 14:54:30.940227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.957 [2024-12-09 14:54:30.940271] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.957 [2024-12-09 14:54:30.940286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:52.957 [2024-12-09 14:54:30.940321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:52.957 [2024-12-09 14:54:30.940506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940899] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.940998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:52.958 [2024-12-09 14:54:30.941089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.958 [2024-12-09 14:54:30.941097] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:52.958 [2024-12-09 14:54:30.941105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.958 [2024-12-09 14:54:30.941113] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:52.958 [2024-12-09 14:54:30.941120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.958 [2024-12-09 14:54:30.941129] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.958 [2024-12-09 14:54:30.941137] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.958 [2024-12-09 14:54:30.941145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.958 [2024-12-09 14:54:30.941157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.958 [2024-12-09 14:54:30.941164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.958 [2024-12-09 14:54:30.941170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.958 [2024-12-09 14:54:30.941178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.958 [2024-12-09 14:54:30.941186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.958 [2024-12-09 14:54:30.941195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:21:52.958 [2024-12-09 14:54:30.941203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.958 [2024-12-09 14:54:30.954432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.958 [2024-12-09 14:54:30.954472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:52.958 [2024-12-09 14:54:30.954483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.199 ms 00:21:52.958 [2024-12-09 14:54:30.954491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.958 [2024-12-09 14:54:30.954926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.958 [2024-12-09 14:54:30.954943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:52.958 [2024-12-09 14:54:30.954954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:21:52.958 [2024-12-09 14:54:30.954961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.958 [2024-12-09 14:54:30.993402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.958 [2024-12-09 14:54:30.993448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.958 [2024-12-09 14:54:30.993459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.959 [2024-12-09 14:54:30.993473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.959 [2024-12-09 14:54:30.993561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.959 [2024-12-09 14:54:30.993571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.959 [2024-12-09 14:54:30.993580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.959 [2024-12-09 14:54:30.993588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.959 [2024-12-09 14:54:30.993636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.959 [2024-12-09 14:54:30.993646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.959 [2024-12-09 14:54:30.993654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.959 [2024-12-09 14:54:30.993662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.959 [2024-12-09 14:54:30.993684] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.959 [2024-12-09 14:54:30.993692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.959 [2024-12-09 14:54:30.993700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.959 [2024-12-09 14:54:30.993707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.077447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.077499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.220 [2024-12-09 14:54:31.077513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.077527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.145513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.145564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.220 [2024-12-09 14:54:31.145577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.145586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.145661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.145671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.220 [2024-12-09 14:54:31.145680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.145689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.145721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.145737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.220 [2024-12-09 14:54:31.145745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.145753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.145871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.145883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.220 [2024-12-09 14:54:31.145892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.145901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.145934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.145944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.220 [2024-12-09 14:54:31.145956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.145964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.146005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.146015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.220 [2024-12-09 14:54:31.146024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.146032] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.146081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.220 [2024-12-09 14:54:31.146095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.220 [2024-12-09 14:54:31.146104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.220 [2024-12-09 14:54:31.146113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.220 [2024-12-09 14:54:31.146268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.031 ms, result 0 00:21:53.793 00:21:53.793 00:21:54.055 14:54:31 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78430 00:21:54.055 14:54:31 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78430 00:21:54.055 14:54:31 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78430 ']' 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.055 14:54:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:54.055 [2024-12-09 14:54:32.009636] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
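The shell trace above shows the trim fixture starting a fresh spdk_tgt (with -L ftl_init so the FTL bring-up is logged) and then parking in waitforlisten until the target's JSON-RPC server answers on /var/tmp/spdk.sock. That readiness check can be reproduced outside the harness; the sketch below is a minimal stand-in rather than the autotest_common.sh implementation. It assumes the default socket path seen in this log and the JSON-RPC 2.0 framing that scripts/rpc.py uses, with rpc_get_methods chosen only as a harmless read-only probe.

import json
import socket
import time

SOCK_PATH = "/var/tmp/spdk.sock"  # default spdk_tgt RPC socket, as above

def wait_for_rpc(path=SOCK_PATH, timeout=30.0, poll=0.2):
    """Block until an SPDK target is listening on its RPC socket."""
    deadline = time.monotonic() + timeout
    while True:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return s
        except OSError:
            s.close()
            if time.monotonic() > deadline:
                raise TimeoutError(f"no RPC listener on {path}")
            time.sleep(poll)

if __name__ == "__main__":
    sock = wait_for_rpc()
    # A read-only request used only to confirm the server answers; a real
    # client would keep reading until the reply parses as complete JSON.
    req = {"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}
    sock.sendall(json.dumps(req).encode())
    print(sock.recv(1 << 20).decode())
    sock.close()

Once the socket is up, the harness issues the same style of request via scripts/rpc.py, as in the load_config and bdev_ftl_unmap calls that follow.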
00:21:54.055 [2024-12-09 14:54:32.009793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78430 ] 00:21:54.055 [2024-12-09 14:54:32.173853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.315 [2024-12-09 14:54:32.291950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.889 14:54:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.889 14:54:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:54.889 14:54:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:55.150 [2024-12-09 14:54:33.182629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.150 [2024-12-09 14:54:33.182709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.413 [2024-12-09 14:54:33.361209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.361268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.413 [2024-12-09 14:54:33.361285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.413 [2024-12-09 14:54:33.361293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.364391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.364440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.413 [2024-12-09 14:54:33.364453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:21:55.413 [2024-12-09 14:54:33.364460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.364573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.413 [2024-12-09 14:54:33.365303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.413 [2024-12-09 14:54:33.365336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.365345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.413 [2024-12-09 14:54:33.365357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:21:55.413 [2024-12-09 14:54:33.365365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.367143] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:55.413 [2024-12-09 14:54:33.381278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.381329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:55.413 [2024-12-09 14:54:33.381342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.139 ms 00:21:55.413 [2024-12-09 14:54:33.381352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.381460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.381474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:55.413 [2024-12-09 14:54:33.381486] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:55.413 [2024-12-09 14:54:33.381496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.389336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.389385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.413 [2024-12-09 14:54:33.389396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.786 ms 00:21:55.413 [2024-12-09 14:54:33.389406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.389520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.389533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.413 [2024-12-09 14:54:33.389543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:55.413 [2024-12-09 14:54:33.389556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.389581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.389591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.413 [2024-12-09 14:54:33.389599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:55.413 [2024-12-09 14:54:33.389609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.389632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:55.413 [2024-12-09 14:54:33.393758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.393796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.413 [2024-12-09 14:54:33.393820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms 00:21:55.413 [2024-12-09 14:54:33.393828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.393907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.413 [2024-12-09 14:54:33.393918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.413 [2024-12-09 14:54:33.393929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:55.413 [2024-12-09 14:54:33.393940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.413 [2024-12-09 14:54:33.393964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:55.413 [2024-12-09 14:54:33.393987] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:55.413 [2024-12-09 14:54:33.394033] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:55.413 [2024-12-09 14:54:33.394049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:55.413 [2024-12-09 14:54:33.394162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.413 [2024-12-09 14:54:33.394173] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.414 [2024-12-09 14:54:33.394189] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:55.414 [2024-12-09 14:54:33.394200] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394212] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394220] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:55.414 [2024-12-09 14:54:33.394230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.414 [2024-12-09 14:54:33.394238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.414 [2024-12-09 14:54:33.394249] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.414 [2024-12-09 14:54:33.394258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.414 [2024-12-09 14:54:33.394268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.414 [2024-12-09 14:54:33.394276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:55.414 [2024-12-09 14:54:33.394285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.414 [2024-12-09 14:54:33.394374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.414 [2024-12-09 14:54:33.394385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.414 [2024-12-09 14:54:33.394392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:55.414 [2024-12-09 14:54:33.394402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.414 [2024-12-09 14:54:33.394505] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.414 [2024-12-09 14:54:33.394528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.414 [2024-12-09 14:54:33.394536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:55.414 [2024-12-09 14:54:33.394567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.414 [2024-12-09 14:54:33.394592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.414 [2024-12-09 14:54:33.394610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.414 [2024-12-09 14:54:33.394619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:55.414 [2024-12-09 14:54:33.394626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.414 [2024-12-09 14:54:33.394635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.414 [2024-12-09 14:54:33.394641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:55.414 [2024-12-09 14:54:33.394650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 
[2024-12-09 14:54:33.394659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.414 [2024-12-09 14:54:33.394669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.414 [2024-12-09 14:54:33.394697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.414 [2024-12-09 14:54:33.394722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.414 [2024-12-09 14:54:33.394744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.414 [2024-12-09 14:54:33.394769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.414 [2024-12-09 14:54:33.394791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.414 [2024-12-09 14:54:33.394821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.414 [2024-12-09 14:54:33.394830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:55.414 [2024-12-09 14:54:33.394836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.414 [2024-12-09 14:54:33.394845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.414 [2024-12-09 14:54:33.394866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:55.414 [2024-12-09 14:54:33.394877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.414 [2024-12-09 14:54:33.394893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:55.414 [2024-12-09 14:54:33.394899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.414 [2024-12-09 14:54:33.394917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.414 [2024-12-09 14:54:33.394927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.414 [2024-12-09 14:54:33.394943] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:55.414 [2024-12-09 14:54:33.394952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.414 [2024-12-09 14:54:33.394961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.414 [2024-12-09 14:54:33.394969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.414 [2024-12-09 14:54:33.394978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.414 [2024-12-09 14:54:33.394985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.414 [2024-12-09 14:54:33.394996] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.414 [2024-12-09 14:54:33.395006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:55.414 [2024-12-09 14:54:33.395029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:55.414 [2024-12-09 14:54:33.395038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:55.414 [2024-12-09 14:54:33.395046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:55.414 [2024-12-09 14:54:33.395055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:55.414 [2024-12-09 14:54:33.395064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:55.414 [2024-12-09 14:54:33.395085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:55.414 [2024-12-09 14:54:33.395093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:55.414 [2024-12-09 14:54:33.395102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:55.414 [2024-12-09 14:54:33.395109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:55.414 [2024-12-09 14:54:33.395154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.414 [2024-12-09 
14:54:33.395162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.414 [2024-12-09 14:54:33.395181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.414 [2024-12-09 14:54:33.395191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.414 [2024-12-09 14:54:33.395199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.414 [2024-12-09 14:54:33.395208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.414 [2024-12-09 14:54:33.395216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.414 [2024-12-09 14:54:33.395225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:21:55.414 [2024-12-09 14:54:33.395234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.414 [2024-12-09 14:54:33.426553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.414 [2024-12-09 14:54:33.426603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.414 [2024-12-09 14:54:33.426618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.255 ms 00:21:55.414 [2024-12-09 14:54:33.426628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.414 [2024-12-09 14:54:33.426762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.414 [2024-12-09 14:54:33.426773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:55.414 [2024-12-09 14:54:33.426785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:55.414 [2024-12-09 14:54:33.426793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.461321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.461368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.415 [2024-12-09 14:54:33.461382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.481 ms 00:21:55.415 [2024-12-09 14:54:33.461390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.461476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.461486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.415 [2024-12-09 14:54:33.461499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:55.415 [2024-12-09 14:54:33.461506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.462081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.462114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.415 [2024-12-09 14:54:33.462126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:21:55.415 [2024-12-09 14:54:33.462134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.462290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.462299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.415 [2024-12-09 14:54:33.462310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:55.415 [2024-12-09 14:54:33.462318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.480061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.480102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.415 [2024-12-09 14:54:33.480116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.718 ms 00:21:55.415 [2024-12-09 14:54:33.480123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.415 [2024-12-09 14:54:33.509548] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:55.415 [2024-12-09 14:54:33.509606] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:55.415 [2024-12-09 14:54:33.509627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.415 [2024-12-09 14:54:33.509639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:55.415 [2024-12-09 14:54:33.509654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.388 ms 00:21:55.415 [2024-12-09 14:54:33.509671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.535390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.535444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:55.676 [2024-12-09 14:54:33.535459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:21:55.676 [2024-12-09 14:54:33.535467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.548576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.548620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:55.676 [2024-12-09 14:54:33.548637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.010 ms 00:21:55.676 [2024-12-09 14:54:33.548644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.561364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.561418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:55.676 [2024-12-09 14:54:33.561433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.634 ms 00:21:55.676 [2024-12-09 14:54:33.561440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.562140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.562172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:55.676 [2024-12-09 14:54:33.562185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:21:55.676 [2024-12-09 14:54:33.562193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 
14:54:33.626822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.626894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:55.676 [2024-12-09 14:54:33.626913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.582 ms 00:21:55.676 [2024-12-09 14:54:33.626922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.638119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:55.676 [2024-12-09 14:54:33.656725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.656783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:55.676 [2024-12-09 14:54:33.656812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.691 ms 00:21:55.676 [2024-12-09 14:54:33.656824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.676 [2024-12-09 14:54:33.656911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.676 [2024-12-09 14:54:33.656924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:55.676 [2024-12-09 14:54:33.656934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:55.676 [2024-12-09 14:54:33.656945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.656999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.677 [2024-12-09 14:54:33.657011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:55.677 [2024-12-09 14:54:33.657020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:55.677 [2024-12-09 14:54:33.657032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.657057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.677 [2024-12-09 14:54:33.657067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:55.677 [2024-12-09 14:54:33.657075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.677 [2024-12-09 14:54:33.657087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.657122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:55.677 [2024-12-09 14:54:33.657136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.677 [2024-12-09 14:54:33.657148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:55.677 [2024-12-09 14:54:33.657158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:55.677 [2024-12-09 14:54:33.657165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.683338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.677 [2024-12-09 14:54:33.683387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:55.677 [2024-12-09 14:54:33.683403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.140 ms 00:21:55.677 [2024-12-09 14:54:33.683413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.683540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.677 [2024-12-09 14:54:33.683552] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:55.677 [2024-12-09 14:54:33.683564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:55.677 [2024-12-09 14:54:33.683575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.677 [2024-12-09 14:54:33.684625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.677 [2024-12-09 14:54:33.687942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 323.096 ms, result 0 00:21:55.677 [2024-12-09 14:54:33.690068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.677 Some configs were skipped because the RPC state that can call them passed over. 00:21:55.677 14:54:33 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:55.938 [2024-12-09 14:54:33.938719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.938 [2024-12-09 14:54:33.938788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:55.938 [2024-12-09 14:54:33.938816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:21:55.938 [2024-12-09 14:54:33.938828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.938 [2024-12-09 14:54:33.938875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.387 ms, result 0 00:21:55.938 true 00:21:55.938 14:54:33 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:56.199 [2024-12-09 14:54:34.154207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.199 [2024-12-09 14:54:34.154259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:56.200 [2024-12-09 14:54:34.154272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:21:56.200 [2024-12-09 14:54:34.154280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.200 [2024-12-09 14:54:34.154316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.588 ms, result 0 00:21:56.200 true 00:21:56.200 14:54:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78430 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78430 ']' 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78430 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78430 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.200 killing process with pid 78430 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78430' 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78430 00:21:56.200 14:54:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78430 00:21:56.770 [2024-12-09 14:54:34.885069] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.770 [2024-12-09 14:54:34.885118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.770 [2024-12-09 14:54:34.885128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:56.770 [2024-12-09 14:54:34.885136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.770 [2024-12-09 14:54:34.885154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:56.770 [2024-12-09 14:54:34.887303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.770 [2024-12-09 14:54:34.887329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.770 [2024-12-09 14:54:34.887340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:21:56.770 [2024-12-09 14:54:34.887346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.770 [2024-12-09 14:54:34.887569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.770 [2024-12-09 14:54:34.887581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.770 [2024-12-09 14:54:34.887589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:21:56.770 [2024-12-09 14:54:34.887595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.770 [2024-12-09 14:54:34.890618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.770 [2024-12-09 14:54:34.890641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.770 [2024-12-09 14:54:34.890652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:21:56.770 [2024-12-09 14:54:34.890658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.895901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.030 [2024-12-09 14:54:34.895926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:57.030 [2024-12-09 14:54:34.895936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.215 ms 00:21:57.030 [2024-12-09 14:54:34.895942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.903280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.030 [2024-12-09 14:54:34.903310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:57.030 [2024-12-09 14:54:34.903320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.297 ms 00:21:57.030 [2024-12-09 14:54:34.903326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.909977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.030 [2024-12-09 14:54:34.910006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:57.030 [2024-12-09 14:54:34.910016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.621 ms 00:21:57.030 [2024-12-09 14:54:34.910022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.910129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.030 [2024-12-09 14:54:34.910137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:57.030 [2024-12-09 14:54:34.910146] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:57.030 [2024-12-09 14:54:34.910152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.918058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.030 [2024-12-09 14:54:34.918083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:57.030 [2024-12-09 14:54:34.918091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.890 ms 00:21:57.030 [2024-12-09 14:54:34.918097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.030 [2024-12-09 14:54:34.925175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.031 [2024-12-09 14:54:34.925202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:57.031 [2024-12-09 14:54:34.925214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:21:57.031 [2024-12-09 14:54:34.925219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.031 [2024-12-09 14:54:34.931862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.031 [2024-12-09 14:54:34.931886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:57.031 [2024-12-09 14:54:34.931894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.608 ms 00:21:57.031 [2024-12-09 14:54:34.931900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.031 [2024-12-09 14:54:34.939112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.031 [2024-12-09 14:54:34.939137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:57.031 [2024-12-09 14:54:34.939145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.163 ms 00:21:57.031 [2024-12-09 14:54:34.939151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.031 [2024-12-09 14:54:34.939178] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:57.031 [2024-12-09 14:54:34.939189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939256] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 
[2024-12-09 14:54:34.939417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:57.031 [2024-12-09 14:54:34.939575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:57.031 [2024-12-09 14:54:34.939701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:57.032 [2024-12-09 14:54:34.939847] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:57.032 [2024-12-09 14:54:34.939857] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:21:57.032 [2024-12-09 14:54:34.939866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:57.032 [2024-12-09 14:54:34.939873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:57.032 [2024-12-09 14:54:34.939878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:57.032 [2024-12-09 14:54:34.939885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:57.032 [2024-12-09 14:54:34.939890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:57.032 [2024-12-09 14:54:34.939897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:57.032 [2024-12-09 14:54:34.939903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:57.032 [2024-12-09 14:54:34.939909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:57.032 [2024-12-09 14:54:34.939914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:57.032 [2024-12-09 14:54:34.939921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
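In the statistics dump above, WAF (write-amplification factor) is total device writes divided by user writes, so with user writes at 0 the ratio is undefined and the log prints "WAF: inf". A minimal shell illustration of that rule, assuming the counters mean what the dump labels suggest (the variable names here are illustrative, not the ftl_debug.c fields):

  # WAF = total_writes / user_writes; a zero denominator is reported as "inf"
  total_writes=960
  user_writes=0
  if [ "$user_writes" -eq 0 ]; then
    echo "WAF: inf"
  else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi
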
00:21:57.032 [2024-12-09 14:54:34.939927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:57.032 [2024-12-09 14:54:34.939934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:57.032 [2024-12-09 14:54:34.939939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.949370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.032 [2024-12-09 14:54:34.949395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:57.032 [2024-12-09 14:54:34.949406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.413 ms 00:21:57.032 [2024-12-09 14:54:34.949412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.950589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.032 [2024-12-09 14:54:34.950614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:57.032 [2024-12-09 14:54:34.950624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:21:57.032 [2024-12-09 14:54:34.950629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.985336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:34.985362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.032 [2024-12-09 14:54:34.985372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:34.985378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.985445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:34.985452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.032 [2024-12-09 14:54:34.985461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:34.985467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.985499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:34.985506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.032 [2024-12-09 14:54:34.985515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:34.985521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:34.985535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:34.985540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.032 [2024-12-09 14:54:34.985548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:34.985554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.044170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.044201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.032 [2024-12-09 14:54:35.044212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.044217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 
14:54:35.092453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.032 [2024-12-09 14:54:35.092495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.032 [2024-12-09 14:54:35.092574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.032 [2024-12-09 14:54:35.092617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.032 [2024-12-09 14:54:35.092706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.032 [2024-12-09 14:54:35.092751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.032 [2024-12-09 14:54:35.092819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.032 [2024-12-09 14:54:35.092868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.032 [2024-12-09 14:54:35.092875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.032 [2024-12-09 14:54:35.092881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.032 [2024-12-09 14:54:35.092983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 207.896 ms, result 0 00:21:57.600 14:54:35 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:57.600 [2024-12-09 14:54:35.685052] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:21:57.600 [2024-12-09 14:54:35.685174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78479 ] 00:21:57.860 [2024-12-09 14:54:35.839633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.860 [2024-12-09 14:54:35.923771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.118 [2024-12-09 14:54:36.132385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.118 [2024-12-09 14:54:36.132438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.378 [2024-12-09 14:54:36.283953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.378 [2024-12-09 14:54:36.283991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.378 [2024-12-09 14:54:36.284002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:58.378 [2024-12-09 14:54:36.284008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.378 [2024-12-09 14:54:36.286047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.378 [2024-12-09 14:54:36.286076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.378 [2024-12-09 14:54:36.286083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:21:58.378 [2024-12-09 14:54:36.286089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.378 [2024-12-09 14:54:36.286143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.378 [2024-12-09 14:54:36.286644] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.378 [2024-12-09 14:54:36.286664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.378 [2024-12-09 14:54:36.286670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.378 [2024-12-09 14:54:36.286677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:21:58.378 [2024-12-09 14:54:36.286682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.378 [2024-12-09 14:54:36.287632] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.379 [2024-12-09 14:54:36.296961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.296989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.379 [2024-12-09 14:54:36.296997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.330 ms 00:21:58.379 [2024-12-09 14:54:36.297004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.297072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.297082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.379 [2024-12-09 14:54:36.297088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:58.379 [2024-12-09 
14:54:36.297094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.301288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.301313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.379 [2024-12-09 14:54:36.301321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.167 ms 00:21:58.379 [2024-12-09 14:54:36.301326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.301394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.301402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.379 [2024-12-09 14:54:36.301408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:58.379 [2024-12-09 14:54:36.301414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.301432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.301438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.379 [2024-12-09 14:54:36.301444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.379 [2024-12-09 14:54:36.301449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.301466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:58.379 [2024-12-09 14:54:36.304183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.304210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.379 [2024-12-09 14:54:36.304217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:21:58.379 [2024-12-09 14:54:36.304223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.304255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.304267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.379 [2024-12-09 14:54:36.304273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:58.379 [2024-12-09 14:54:36.304278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.304295] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.379 [2024-12-09 14:54:36.304313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.379 [2024-12-09 14:54:36.304347] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.379 [2024-12-09 14:54:36.304358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:58.379 [2024-12-09 14:54:36.304443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.379 [2024-12-09 14:54:36.304451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.379 [2024-12-09 14:54:36.304459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
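The layout figures reported below are internally consistent: the trace shows 23592960 L2P entries with an L2P address size of 4 bytes, and the NV cache layout dump then lists "Region l2p ... blocks: 90.00 MiB" — and 23592960 entries at 4 bytes each is exactly 90 MiB. A quick sanity check in the shell the test scripts already use (plain arithmetic, not an SPDK utility):

  # L2P region size = L2P entries * L2P address size, expressed in MiB
  echo $(( 23592960 * 4 / 1024 / 1024 ))   # 94371840 bytes -> prints 90
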
00:21:58.379 [2024-12-09 14:54:36.304471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304480] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:58.379 [2024-12-09 14:54:36.304492] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.379 [2024-12-09 14:54:36.304497] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.379 [2024-12-09 14:54:36.304503] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.379 [2024-12-09 14:54:36.304508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.304514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.379 [2024-12-09 14:54:36.304520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:21:58.379 [2024-12-09 14:54:36.304525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.304591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.379 [2024-12-09 14:54:36.304599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.379 [2024-12-09 14:54:36.304604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:58.379 [2024-12-09 14:54:36.304613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.379 [2024-12-09 14:54:36.304698] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.379 [2024-12-09 14:54:36.304706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.379 [2024-12-09 14:54:36.304712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.379 [2024-12-09 14:54:36.304728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.379 [2024-12-09 14:54:36.304743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.379 [2024-12-09 14:54:36.304754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.379 [2024-12-09 14:54:36.304763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:58.379 [2024-12-09 14:54:36.304768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.379 [2024-12-09 14:54:36.304773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.379 [2024-12-09 14:54:36.304779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:58.379 [2024-12-09 14:54:36.304784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:58.379 [2024-12-09 14:54:36.304794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.379 [2024-12-09 14:54:36.304820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.379 [2024-12-09 14:54:36.304835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.379 [2024-12-09 14:54:36.304850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.379 [2024-12-09 14:54:36.304865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.379 [2024-12-09 14:54:36.304881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.379 [2024-12-09 14:54:36.304890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.379 [2024-12-09 14:54:36.304895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:58.379 [2024-12-09 14:54:36.304900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.379 [2024-12-09 14:54:36.304905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.379 [2024-12-09 14:54:36.304910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:58.379 [2024-12-09 14:54:36.304915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.379 [2024-12-09 14:54:36.304924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:58.379 [2024-12-09 14:54:36.304930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304934] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.379 [2024-12-09 14:54:36.304940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.379 [2024-12-09 14:54:36.304952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.379 [2024-12-09 14:54:36.304964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:58.379 [2024-12-09 14:54:36.304969] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.379 [2024-12-09 14:54:36.304974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.379 [2024-12-09 14:54:36.304979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.379 [2024-12-09 14:54:36.304984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.379 [2024-12-09 14:54:36.304989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.379 [2024-12-09 14:54:36.304995] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.379 [2024-12-09 14:54:36.305002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.379 [2024-12-09 14:54:36.305008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:58.380 [2024-12-09 14:54:36.305013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:58.380 [2024-12-09 14:54:36.305019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:58.380 [2024-12-09 14:54:36.305024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:58.380 [2024-12-09 14:54:36.305029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:58.380 [2024-12-09 14:54:36.305034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:58.380 [2024-12-09 14:54:36.305039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:58.380 [2024-12-09 14:54:36.305044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:58.380 [2024-12-09 14:54:36.305050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:58.380 [2024-12-09 14:54:36.305056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:58.380 [2024-12-09 14:54:36.305082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.380 [2024-12-09 14:54:36.305088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.380 [2024-12-09 14:54:36.305099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.380 [2024-12-09 14:54:36.305105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.380 [2024-12-09 14:54:36.305111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.380 [2024-12-09 14:54:36.305116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.305124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.380 [2024-12-09 14:54:36.305129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:21:58.380 [2024-12-09 14:54:36.305135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.325626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.325660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.380 [2024-12-09 14:54:36.325667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.451 ms 00:21:58.380 [2024-12-09 14:54:36.325673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.325764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.325771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.380 [2024-12-09 14:54:36.325778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:58.380 [2024-12-09 14:54:36.325784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.366334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.366365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.380 [2024-12-09 14:54:36.366376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.533 ms 00:21:58.380 [2024-12-09 14:54:36.366383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.366440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.366448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.380 [2024-12-09 14:54:36.366455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:58.380 [2024-12-09 14:54:36.366462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.366753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.366772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.380 [2024-12-09 14:54:36.366779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:58.380 [2024-12-09 14:54:36.366788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.366914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.366927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.380 [2024-12-09 14:54:36.366933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:58.380 [2024-12-09 14:54:36.366939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.377480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.377507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.380 [2024-12-09 14:54:36.377514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.525 ms 00:21:58.380 [2024-12-09 14:54:36.377520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.387213] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:58.380 [2024-12-09 14:54:36.387241] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:58.380 [2024-12-09 14:54:36.387250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.387256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:58.380 [2024-12-09 14:54:36.387263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.648 ms 00:21:58.380 [2024-12-09 14:54:36.387269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.405607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.405634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:58.380 [2024-12-09 14:54:36.405643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.294 ms 00:21:58.380 [2024-12-09 14:54:36.405650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.414487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.414514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:58.380 [2024-12-09 14:54:36.414521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.785 ms 00:21:58.380 [2024-12-09 14:54:36.414526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.423240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.423265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:58.380 [2024-12-09 14:54:36.423272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.675 ms 00:21:58.380 [2024-12-09 14:54:36.423277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.423728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.423747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.380 [2024-12-09 14:54:36.423754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:21:58.380 [2024-12-09 14:54:36.423759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.467426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.467460] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:58.380 [2024-12-09 14:54:36.467471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.650 ms 00:21:58.380 [2024-12-09 14:54:36.467477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.475160] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:58.380 [2024-12-09 14:54:36.486539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.486566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.380 [2024-12-09 14:54:36.486575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.996 ms 00:21:58.380 [2024-12-09 14:54:36.486584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.486653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.486661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:58.380 [2024-12-09 14:54:36.486668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:58.380 [2024-12-09 14:54:36.486674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.486707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.486714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.380 [2024-12-09 14:54:36.486721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:58.380 [2024-12-09 14:54:36.486729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.486753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.486760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.380 [2024-12-09 14:54:36.486766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:58.380 [2024-12-09 14:54:36.486771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.380 [2024-12-09 14:54:36.486795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:58.380 [2024-12-09 14:54:36.486812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.380 [2024-12-09 14:54:36.486819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:58.380 [2024-12-09 14:54:36.486825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:58.380 [2024-12-09 14:54:36.486830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.641 [2024-12-09 14:54:36.504681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.641 [2024-12-09 14:54:36.504709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.641 [2024-12-09 14:54:36.504718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.834 ms 00:21:58.641 [2024-12-09 14:54:36.504724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.641 [2024-12-09 14:54:36.504790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.641 [2024-12-09 14:54:36.504798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.641 [2024-12-09 14:54:36.504816] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:58.641 [2024-12-09 14:54:36.504822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.641 [2024-12-09 14:54:36.505718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.642 [2024-12-09 14:54:36.507957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.551 ms, result 0 00:21:58.642 [2024-12-09 14:54:36.508516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.642 [2024-12-09 14:54:36.523249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:59.587  [2024-12-09T14:54:38.654Z] Copying: 22/256 [MB] (22 MBps) [2024-12-09T14:54:39.599Z] Copying: 37/256 [MB] (14 MBps) [2024-12-09T14:54:40.986Z] Copying: 57/256 [MB] (20 MBps) [2024-12-09T14:54:41.927Z] Copying: 74/256 [MB] (17 MBps) [2024-12-09T14:54:42.870Z] Copying: 92/256 [MB] (17 MBps) [2024-12-09T14:54:43.810Z] Copying: 110/256 [MB] (18 MBps) [2024-12-09T14:54:44.752Z] Copying: 127/256 [MB] (16 MBps) [2024-12-09T14:54:45.696Z] Copying: 150/256 [MB] (22 MBps) [2024-12-09T14:54:46.636Z] Copying: 172/256 [MB] (22 MBps) [2024-12-09T14:54:47.579Z] Copying: 200/256 [MB] (28 MBps) [2024-12-09T14:54:48.968Z] Copying: 217/256 [MB] (16 MBps) [2024-12-09T14:54:49.541Z] Copying: 235/256 [MB] (18 MBps) [2024-12-09T14:54:50.116Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-09 14:54:49.864508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.994 [2024-12-09 14:54:49.878405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.878468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.994 [2024-12-09 14:54:49.878495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:11.994 [2024-12-09 14:54:49.878507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.878539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:11.994 [2024-12-09 14:54:49.881604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.881649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.994 [2024-12-09 14:54:49.881661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:22:11.994 [2024-12-09 14:54:49.881670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.881980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.881993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.994 [2024-12-09 14:54:49.882002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:11.994 [2024-12-09 14:54:49.882012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.885729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.885755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.994 [2024-12-09 14:54:49.885766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.696 ms 00:22:11.994 [2024-12-09 14:54:49.885774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.892794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.892864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.994 [2024-12-09 14:54:49.892876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.999 ms 00:22:11.994 [2024-12-09 14:54:49.892884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.919236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.919291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.994 [2024-12-09 14:54:49.919304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.274 ms 00:22:11.994 [2024-12-09 14:54:49.919312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.935906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.935959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.994 [2024-12-09 14:54:49.935981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.523 ms 00:22:11.994 [2024-12-09 14:54:49.935990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.936160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.936173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.994 [2024-12-09 14:54:49.936193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:11.994 [2024-12-09 14:54:49.936200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.962581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.962629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:11.994 [2024-12-09 14:54:49.962642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.362 ms 00:22:11.994 [2024-12-09 14:54:49.962649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:49.988225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:49.988275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:11.994 [2024-12-09 14:54:49.988287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.509 ms 00:22:11.994 [2024-12-09 14:54:49.988295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:50.013201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:50.013253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.994 [2024-12-09 14:54:50.013264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.854 ms 00:22:11.994 [2024-12-09 14:54:50.013272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:50.038794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.994 [2024-12-09 14:54:50.038857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.994 [2024-12-09 
14:54:50.038885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.439 ms 00:22:11.994 [2024-12-09 14:54:50.038894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.994 [2024-12-09 14:54:50.038961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.994 [2024-12-09 14:54:50.038978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.038989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.038998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.994 [2024-12-09 14:54:50.039149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039349] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039543] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 
14:54:50.039754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.995 [2024-12-09 14:54:50.039787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.995 [2024-12-09 14:54:50.039796] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cea04ff-c544-4eb1-8911-42d62e850592 00:22:11.995 [2024-12-09 14:54:50.039830] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.995 [2024-12-09 14:54:50.039839] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.995 [2024-12-09 14:54:50.039848] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.995 [2024-12-09 14:54:50.039857] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.995 [2024-12-09 14:54:50.039864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.995 [2024-12-09 14:54:50.039874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.995 [2024-12-09 14:54:50.039891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.995 [2024-12-09 14:54:50.039898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.995 [2024-12-09 14:54:50.039905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.995 [2024-12-09 14:54:50.039914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.995 [2024-12-09 14:54:50.039922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.995 [2024-12-09 14:54:50.039933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:22:11.995 [2024-12-09 14:54:50.039941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.995 [2024-12-09 14:54:50.053540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.996 [2024-12-09 14:54:50.053592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.996 [2024-12-09 14:54:50.053604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.578 ms 00:22:11.996 [2024-12-09 14:54:50.053613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.996 [2024-12-09 14:54:50.054051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.996 [2024-12-09 14:54:50.054073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.996 [2024-12-09 14:54:50.054084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:22:11.996 [2024-12-09 14:54:50.054092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.996 [2024-12-09 14:54:50.093558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.996 [2024-12-09 14:54:50.093617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.996 [2024-12-09 14:54:50.093630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.996 [2024-12-09 14:54:50.093645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.996 [2024-12-09 14:54:50.093747] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:11.996 [2024-12-09 14:54:50.093757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.996 [2024-12-09 14:54:50.093766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.996 [2024-12-09 14:54:50.093774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.996 [2024-12-09 14:54:50.093846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.996 [2024-12-09 14:54:50.093858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.996 [2024-12-09 14:54:50.093866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.996 [2024-12-09 14:54:50.093874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.996 [2024-12-09 14:54:50.093897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:11.996 [2024-12-09 14:54:50.093905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.996 [2024-12-09 14:54:50.093913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:11.996 [2024-12-09 14:54:50.093921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.257 [2024-12-09 14:54:50.180949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.257 [2024-12-09 14:54:50.181008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.257 [2024-12-09 14:54:50.181023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.257 [2024-12-09 14:54:50.181032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.257 [2024-12-09 14:54:50.251553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.257 [2024-12-09 14:54:50.251610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.257 [2024-12-09 14:54:50.251623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.257 [2024-12-09 14:54:50.251632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.257 [2024-12-09 14:54:50.251716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.257 [2024-12-09 14:54:50.251727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.257 [2024-12-09 14:54:50.251736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.257 [2024-12-09 14:54:50.251745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.257 [2024-12-09 14:54:50.251779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.257 [2024-12-09 14:54:50.251792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.257 [2024-12-09 14:54:50.251830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.257 [2024-12-09 14:54:50.251839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.257 [2024-12-09 14:54:50.251939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.257 [2024-12-09 14:54:50.251950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.257 [2024-12-09 14:54:50.251958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.258 [2024-12-09 14:54:50.251967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:12.258 [2024-12-09 14:54:50.252001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.258 [2024-12-09 14:54:50.252024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.258 [2024-12-09 14:54:50.252037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.258 [2024-12-09 14:54:50.252046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.258 [2024-12-09 14:54:50.252093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.258 [2024-12-09 14:54:50.252111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.258 [2024-12-09 14:54:50.252119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.258 [2024-12-09 14:54:50.252128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.258 [2024-12-09 14:54:50.252178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.258 [2024-12-09 14:54:50.252198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.258 [2024-12-09 14:54:50.252207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.258 [2024-12-09 14:54:50.252216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.258 [2024-12-09 14:54:50.252377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.974 ms, result 0 00:22:13.203 00:22:13.203 00:22:13.203 14:54:51 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:13.776 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:13.776 14:54:51 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78430 00:22:13.776 14:54:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78430 ']' 00:22:13.776 14:54:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78430 00:22:13.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78430) - No such process 00:22:13.776 Process with pid 78430 is not found 00:22:13.776 14:54:51 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78430 is not found' 00:22:13.776 00:22:13.776 real 1m15.943s 00:22:13.776 user 1m32.091s 00:22:13.776 sys 0m14.027s 00:22:13.776 ************************************ 00:22:13.776 END TEST ftl_trim 00:22:13.776 ************************************ 00:22:13.776 14:54:51 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.776 14:54:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:13.776 14:54:51 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:13.776 14:54:51 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:13.776 14:54:51 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.777 14:54:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:13.777 ************************************ 00:22:13.777 START TEST ftl_restore 00:22:13.777 ************************************ 00:22:13.777 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:13.777 * Looking for test storage... 00:22:13.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:13.777 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:13.777 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:13.777 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.053 14:54:51 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.053 --rc genhtml_branch_coverage=1 00:22:14.053 --rc genhtml_function_coverage=1 00:22:14.053 --rc genhtml_legend=1 00:22:14.053 --rc geninfo_all_blocks=1 00:22:14.053 --rc geninfo_unexecuted_blocks=1 00:22:14.053 00:22:14.053 ' 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.053 --rc genhtml_branch_coverage=1 00:22:14.053 --rc genhtml_function_coverage=1 00:22:14.053 --rc genhtml_legend=1 00:22:14.053 --rc geninfo_all_blocks=1 00:22:14.053 --rc geninfo_unexecuted_blocks=1 00:22:14.053 00:22:14.053 ' 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.053 --rc genhtml_branch_coverage=1 00:22:14.053 --rc genhtml_function_coverage=1 00:22:14.053 --rc genhtml_legend=1 00:22:14.053 --rc geninfo_all_blocks=1 00:22:14.053 --rc geninfo_unexecuted_blocks=1 00:22:14.053 00:22:14.053 ' 00:22:14.053 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:14.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.053 --rc genhtml_branch_coverage=1 00:22:14.053 --rc genhtml_function_coverage=1 00:22:14.053 --rc genhtml_legend=1 00:22:14.053 --rc geninfo_all_blocks=1 00:22:14.053 --rc geninfo_unexecuted_blocks=1 00:22:14.053 00:22:14.053 ' 00:22:14.053 14:54:51 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:14.053 14:54:51 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
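
The xtrace a little above shows scripts/common.sh gating coverage collection on the installed lcov: `lt 1.15 2` splits both version strings on `IFS=.-:` and compares them field by field, and only a new-enough lcov enables the LCOV_OPTS exports. A hedged bash reconstruction of that comparison follows — the names lt/cmp_versions and the IFS split come straight from the trace, but the loop body is inferred, and the real scripts/common.sh additionally validates each field with its `decimal` helper and supports more operators:

    # Sketch only: mirrors the traced component-wise compare, not the exact source.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == ">" || $op == ">=" ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == "<" || $op == "<=" ]]; return   # taken here: 1 < 2
            fi
        done
        [[ $op == "=" || $op == "<=" || $op == ">=" ]]    # all fields equal
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    lt 1.15 2 && echo "lcov is new enough"                # matches the trace above

Splitting on `.`, `-`, and `:` is what lets dotted, dashed, or colon-separated version strings compare component-wise, with missing trailing fields treated as 0.
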
00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RNqNdCEv4r 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:14.054 
14:54:51 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78711 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78711 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78711 ']' 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.054 14:54:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:14.054 14:54:51 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:14.054 [2024-12-09 14:54:52.044831] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:22:14.054 [2024-12-09 14:54:52.044987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78711 ] 00:22:14.362 [2024-12-09 14:54:52.210655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.362 [2024-12-09 14:54:52.333173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.935 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.935 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:14.935 14:54:53 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:15.508 { 00:22:15.508 "name": "nvme0n1", 00:22:15.508 "aliases": [ 00:22:15.508 "b3d3dcab-f2a1-4c1d-834a-c657dc01476b" 00:22:15.508 ], 00:22:15.508 "product_name": "NVMe disk", 00:22:15.508 "block_size": 4096, 00:22:15.508 "num_blocks": 1310720, 00:22:15.508 "uuid": 
"b3d3dcab-f2a1-4c1d-834a-c657dc01476b", 00:22:15.508 "numa_id": -1, 00:22:15.508 "assigned_rate_limits": { 00:22:15.508 "rw_ios_per_sec": 0, 00:22:15.508 "rw_mbytes_per_sec": 0, 00:22:15.508 "r_mbytes_per_sec": 0, 00:22:15.508 "w_mbytes_per_sec": 0 00:22:15.508 }, 00:22:15.508 "claimed": true, 00:22:15.508 "claim_type": "read_many_write_one", 00:22:15.508 "zoned": false, 00:22:15.508 "supported_io_types": { 00:22:15.508 "read": true, 00:22:15.508 "write": true, 00:22:15.508 "unmap": true, 00:22:15.508 "flush": true, 00:22:15.508 "reset": true, 00:22:15.508 "nvme_admin": true, 00:22:15.508 "nvme_io": true, 00:22:15.508 "nvme_io_md": false, 00:22:15.508 "write_zeroes": true, 00:22:15.508 "zcopy": false, 00:22:15.508 "get_zone_info": false, 00:22:15.508 "zone_management": false, 00:22:15.508 "zone_append": false, 00:22:15.508 "compare": true, 00:22:15.508 "compare_and_write": false, 00:22:15.508 "abort": true, 00:22:15.508 "seek_hole": false, 00:22:15.508 "seek_data": false, 00:22:15.508 "copy": true, 00:22:15.508 "nvme_iov_md": false 00:22:15.508 }, 00:22:15.508 "driver_specific": { 00:22:15.508 "nvme": [ 00:22:15.508 { 00:22:15.508 "pci_address": "0000:00:11.0", 00:22:15.508 "trid": { 00:22:15.508 "trtype": "PCIe", 00:22:15.508 "traddr": "0000:00:11.0" 00:22:15.508 }, 00:22:15.508 "ctrlr_data": { 00:22:15.508 "cntlid": 0, 00:22:15.508 "vendor_id": "0x1b36", 00:22:15.508 "model_number": "QEMU NVMe Ctrl", 00:22:15.508 "serial_number": "12341", 00:22:15.508 "firmware_revision": "8.0.0", 00:22:15.508 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:15.508 "oacs": { 00:22:15.508 "security": 0, 00:22:15.508 "format": 1, 00:22:15.508 "firmware": 0, 00:22:15.508 "ns_manage": 1 00:22:15.508 }, 00:22:15.508 "multi_ctrlr": false, 00:22:15.508 "ana_reporting": false 00:22:15.508 }, 00:22:15.508 "vs": { 00:22:15.508 "nvme_version": "1.4" 00:22:15.508 }, 00:22:15.508 "ns_data": { 00:22:15.508 "id": 1, 00:22:15.508 "can_share": false 00:22:15.508 } 00:22:15.508 } 00:22:15.508 ], 00:22:15.508 "mp_policy": "active_passive" 00:22:15.508 } 00:22:15.508 } 00:22:15.508 ]' 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:15.508 14:54:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:15.508 14:54:53 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:15.770 14:54:53 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f2d6774e-5e05-47ab-8b12-11edaf6a6744 00:22:15.770 14:54:53 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:15.770 14:54:53 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2d6774e-5e05-47ab-8b12-11edaf6a6744 00:22:16.032 14:54:54 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:16.293 14:54:54 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5314e2a7-e0b8-4b94-af46-e25d18eaa649 00:22:16.293 14:54:54 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5314e2a7-e0b8-4b94-af46-e25d18eaa649 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:16.555 14:54:54 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.555 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.555 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:16.555 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:16.555 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:16.555 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:16.816 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:16.816 { 00:22:16.816 "name": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:16.816 "aliases": [ 00:22:16.816 "lvs/nvme0n1p0" 00:22:16.816 ], 00:22:16.816 "product_name": "Logical Volume", 00:22:16.816 "block_size": 4096, 00:22:16.816 "num_blocks": 26476544, 00:22:16.816 "uuid": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:16.816 "assigned_rate_limits": { 00:22:16.816 "rw_ios_per_sec": 0, 00:22:16.816 "rw_mbytes_per_sec": 0, 00:22:16.816 "r_mbytes_per_sec": 0, 00:22:16.816 "w_mbytes_per_sec": 0 00:22:16.816 }, 00:22:16.816 "claimed": false, 00:22:16.816 "zoned": false, 00:22:16.816 "supported_io_types": { 00:22:16.816 "read": true, 00:22:16.816 "write": true, 00:22:16.816 "unmap": true, 00:22:16.816 "flush": false, 00:22:16.816 "reset": true, 00:22:16.816 "nvme_admin": false, 00:22:16.816 "nvme_io": false, 00:22:16.816 "nvme_io_md": false, 00:22:16.816 "write_zeroes": true, 00:22:16.816 "zcopy": false, 00:22:16.816 "get_zone_info": false, 00:22:16.816 "zone_management": false, 00:22:16.816 "zone_append": false, 00:22:16.816 "compare": false, 00:22:16.816 "compare_and_write": false, 00:22:16.816 "abort": false, 00:22:16.816 "seek_hole": true, 00:22:16.816 "seek_data": true, 00:22:16.816 "copy": false, 00:22:16.816 "nvme_iov_md": false 00:22:16.816 }, 00:22:16.816 "driver_specific": { 00:22:16.816 "lvol": { 00:22:16.816 "lvol_store_uuid": "5314e2a7-e0b8-4b94-af46-e25d18eaa649", 00:22:16.816 "base_bdev": "nvme0n1", 00:22:16.816 "thin_provision": true, 00:22:16.816 "num_allocated_clusters": 0, 00:22:16.816 "snapshot": false, 00:22:16.816 "clone": false, 00:22:16.816 "esnap_clone": false 00:22:16.816 } 00:22:16.816 } 00:22:16.816 } 00:22:16.816 ]' 00:22:16.816 14:54:54 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:16.816 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:16.816 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:16.816 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:16.816 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:16.817 14:54:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:16.817 14:54:54 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:16.817 14:54:54 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:16.817 14:54:54 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:17.078 14:54:55 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:17.078 14:54:55 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:17.078 14:54:55 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.078 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.078 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:17.078 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:17.078 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:17.078 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.339 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:17.339 { 00:22:17.339 "name": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:17.339 "aliases": [ 00:22:17.339 "lvs/nvme0n1p0" 00:22:17.339 ], 00:22:17.339 "product_name": "Logical Volume", 00:22:17.339 "block_size": 4096, 00:22:17.339 "num_blocks": 26476544, 00:22:17.339 "uuid": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:17.339 "assigned_rate_limits": { 00:22:17.339 "rw_ios_per_sec": 0, 00:22:17.339 "rw_mbytes_per_sec": 0, 00:22:17.339 "r_mbytes_per_sec": 0, 00:22:17.339 "w_mbytes_per_sec": 0 00:22:17.339 }, 00:22:17.339 "claimed": false, 00:22:17.339 "zoned": false, 00:22:17.339 "supported_io_types": { 00:22:17.339 "read": true, 00:22:17.339 "write": true, 00:22:17.339 "unmap": true, 00:22:17.339 "flush": false, 00:22:17.339 "reset": true, 00:22:17.339 "nvme_admin": false, 00:22:17.339 "nvme_io": false, 00:22:17.339 "nvme_io_md": false, 00:22:17.339 "write_zeroes": true, 00:22:17.339 "zcopy": false, 00:22:17.339 "get_zone_info": false, 00:22:17.339 "zone_management": false, 00:22:17.339 "zone_append": false, 00:22:17.339 "compare": false, 00:22:17.339 "compare_and_write": false, 00:22:17.339 "abort": false, 00:22:17.339 "seek_hole": true, 00:22:17.339 "seek_data": true, 00:22:17.339 "copy": false, 00:22:17.339 "nvme_iov_md": false 00:22:17.339 }, 00:22:17.339 "driver_specific": { 00:22:17.339 "lvol": { 00:22:17.339 "lvol_store_uuid": "5314e2a7-e0b8-4b94-af46-e25d18eaa649", 00:22:17.339 "base_bdev": "nvme0n1", 00:22:17.339 "thin_provision": true, 00:22:17.339 "num_allocated_clusters": 0, 00:22:17.339 "snapshot": false, 00:22:17.339 "clone": false, 00:22:17.340 "esnap_clone": false 00:22:17.340 } 00:22:17.340 } 00:22:17.340 } 00:22:17.340 ]' 00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
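
The jq probes being traced here are the harness's bdev-size helper at work: get_bdev_size reads block_size and num_blocks out of the `bdev_get_bdevs` JSON and converts the product to MiB — 5120 for the 1310720-block nvme0n1 earlier, and 103424 for the 26476544-block lvol in this call. A minimal sketch of that computation, with the function body inferred from the xtrace (the real helper lives in test/common/autotest_common.sh and may differ in detail):

    # Sketch of the traced helper; $rpc_py is scripts/rpc.py as set by common.sh.
    get_bdev_size() {                                   # prints the bdev size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 here
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 26476544 for the lvol
        echo $(( bs * nb / 1024 / 1024 ))               # 4096 * 26476544 B -> 103424 MiB
    }

The arithmetic checks out against the log: 4096 × 1310720 bytes is exactly 5120 MiB (the base_size seen earlier), and 4096 × 26476544 bytes is 103424 MiB (the bdev_size echoed just below).
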
00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:17.340 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:17.340 14:54:55 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:17.340 14:54:55 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:17.601 14:54:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:17.601 14:54:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.601 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.601 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:17.601 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:17.601 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:17.601 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 057d9cc6-df31-4831-82e9-693c4b526c9a 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:17.861 { 00:22:17.861 "name": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:17.861 "aliases": [ 00:22:17.861 "lvs/nvme0n1p0" 00:22:17.861 ], 00:22:17.861 "product_name": "Logical Volume", 00:22:17.861 "block_size": 4096, 00:22:17.861 "num_blocks": 26476544, 00:22:17.861 "uuid": "057d9cc6-df31-4831-82e9-693c4b526c9a", 00:22:17.861 "assigned_rate_limits": { 00:22:17.861 "rw_ios_per_sec": 0, 00:22:17.861 "rw_mbytes_per_sec": 0, 00:22:17.861 "r_mbytes_per_sec": 0, 00:22:17.861 "w_mbytes_per_sec": 0 00:22:17.861 }, 00:22:17.861 "claimed": false, 00:22:17.861 "zoned": false, 00:22:17.861 "supported_io_types": { 00:22:17.861 "read": true, 00:22:17.861 "write": true, 00:22:17.861 "unmap": true, 00:22:17.861 "flush": false, 00:22:17.861 "reset": true, 00:22:17.861 "nvme_admin": false, 00:22:17.861 "nvme_io": false, 00:22:17.861 "nvme_io_md": false, 00:22:17.861 "write_zeroes": true, 00:22:17.861 "zcopy": false, 00:22:17.861 "get_zone_info": false, 00:22:17.861 "zone_management": false, 00:22:17.861 "zone_append": false, 00:22:17.861 "compare": false, 00:22:17.861 "compare_and_write": false, 00:22:17.861 "abort": false, 00:22:17.861 "seek_hole": true, 00:22:17.861 "seek_data": true, 00:22:17.861 "copy": false, 00:22:17.861 "nvme_iov_md": false 00:22:17.861 }, 00:22:17.861 "driver_specific": { 00:22:17.861 "lvol": { 00:22:17.861 "lvol_store_uuid": "5314e2a7-e0b8-4b94-af46-e25d18eaa649", 00:22:17.861 "base_bdev": "nvme0n1", 00:22:17.861 "thin_provision": true, 00:22:17.861 "num_allocated_clusters": 0, 00:22:17.861 "snapshot": false, 00:22:17.861 "clone": false, 00:22:17.861 "esnap_clone": false 00:22:17.861 } 00:22:17.861 } 00:22:17.861 } 00:22:17.861 ]' 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:17.861 14:54:55 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:17.861 14:54:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 057d9cc6-df31-4831-82e9-693c4b526c9a --l2p_dram_limit 10' 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:17.861 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:17.861 14:54:55 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 057d9cc6-df31-4831-82e9-693c4b526c9a --l2p_dram_limit 10 -c nvc0n1p0 00:22:18.123 [2024-12-09 14:54:56.063081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.063127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:18.123 [2024-12-09 14:54:56.063140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:18.123 [2024-12-09 14:54:56.063147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.063196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.063205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.123 [2024-12-09 14:54:56.063213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:18.123 [2024-12-09 14:54:56.063219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.063239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:18.123 [2024-12-09 14:54:56.063861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:18.123 [2024-12-09 14:54:56.063887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.063898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.123 [2024-12-09 14:54:56.063908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:22:18.123 [2024-12-09 14:54:56.063914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.063939] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:22:18.123 [2024-12-09 14:54:56.064953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.064982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:18.123 [2024-12-09 14:54:56.064990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:18.123 [2024-12-09 14:54:56.065001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.070068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 
14:54:56.070203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.123 [2024-12-09 14:54:56.070216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:22:18.123 [2024-12-09 14:54:56.070223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.070294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.070303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.123 [2024-12-09 14:54:56.070309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:18.123 [2024-12-09 14:54:56.070319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.070353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.070363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:18.123 [2024-12-09 14:54:56.070370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:18.123 [2024-12-09 14:54:56.070378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.070395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:18.123 [2024-12-09 14:54:56.073376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.073484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.123 [2024-12-09 14:54:56.073501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:22:18.123 [2024-12-09 14:54:56.073507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.073539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.073545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:18.123 [2024-12-09 14:54:56.073553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:18.123 [2024-12-09 14:54:56.073558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.073585] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:18.123 [2024-12-09 14:54:56.073696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:18.123 [2024-12-09 14:54:56.073708] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:18.123 [2024-12-09 14:54:56.073716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:18.123 [2024-12-09 14:54:56.073726] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:18.123 [2024-12-09 14:54:56.073733] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:18.123 [2024-12-09 14:54:56.073741] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:18.123 [2024-12-09 14:54:56.073746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:18.123 [2024-12-09 14:54:56.073756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:18.123 [2024-12-09 14:54:56.073761] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:18.123 [2024-12-09 14:54:56.073769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.073779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:18.123 [2024-12-09 14:54:56.073787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:22:18.123 [2024-12-09 14:54:56.073792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.073872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.123 [2024-12-09 14:54:56.073880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:18.123 [2024-12-09 14:54:56.073887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:18.123 [2024-12-09 14:54:56.073892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.123 [2024-12-09 14:54:56.073973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:18.123 [2024-12-09 14:54:56.073980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:18.123 [2024-12-09 14:54:56.073988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.123 [2024-12-09 14:54:56.073994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:18.123 [2024-12-09 14:54:56.074007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:18.123 [2024-12-09 14:54:56.074025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.123 [2024-12-09 14:54:56.074038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:18.123 [2024-12-09 14:54:56.074043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:18.123 [2024-12-09 14:54:56.074050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.123 [2024-12-09 14:54:56.074056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:18.123 [2024-12-09 14:54:56.074062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:18.123 [2024-12-09 14:54:56.074067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:18.123 [2024-12-09 14:54:56.074079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:18.123 [2024-12-09 14:54:56.074097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:18.123 
[2024-12-09 14:54:56.074113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:18.123 [2024-12-09 14:54:56.074132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:18.123 [2024-12-09 14:54:56.074148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.123 [2024-12-09 14:54:56.074160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:18.123 [2024-12-09 14:54:56.074167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:18.123 [2024-12-09 14:54:56.074173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.123 [2024-12-09 14:54:56.074179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:18.123 [2024-12-09 14:54:56.074184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:18.123 [2024-12-09 14:54:56.074191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.123 [2024-12-09 14:54:56.074196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:18.124 [2024-12-09 14:54:56.074203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:18.124 [2024-12-09 14:54:56.074207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.124 [2024-12-09 14:54:56.074214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:18.124 [2024-12-09 14:54:56.074219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:18.124 [2024-12-09 14:54:56.074225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.124 [2024-12-09 14:54:56.074230] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:18.124 [2024-12-09 14:54:56.074237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:18.124 [2024-12-09 14:54:56.074242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.124 [2024-12-09 14:54:56.074249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.124 [2024-12-09 14:54:56.074256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:18.124 [2024-12-09 14:54:56.074263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:18.124 [2024-12-09 14:54:56.074268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:18.124 [2024-12-09 14:54:56.074275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:18.124 [2024-12-09 14:54:56.074280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:18.124 [2024-12-09 14:54:56.074286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:18.124 [2024-12-09 14:54:56.074292] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:18.124 [2024-12-09 
14:54:56.074302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:18.124 [2024-12-09 14:54:56.074315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:18.124 [2024-12-09 14:54:56.074322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:18.124 [2024-12-09 14:54:56.074329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:18.124 [2024-12-09 14:54:56.074335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:18.124 [2024-12-09 14:54:56.074341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:18.124 [2024-12-09 14:54:56.074347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:18.124 [2024-12-09 14:54:56.074354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:18.124 [2024-12-09 14:54:56.074359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:18.124 [2024-12-09 14:54:56.074367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:18.124 [2024-12-09 14:54:56.074397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:18.124 [2024-12-09 14:54:56.074404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:18.124 [2024-12-09 14:54:56.074417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:18.124 [2024-12-09 14:54:56.074422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:18.124 [2024-12-09 14:54:56.074429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:18.124 [2024-12-09 14:54:56.074435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.124 [2024-12-09 14:54:56.074442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:18.124 [2024-12-09 14:54:56.074448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:22:18.124 [2024-12-09 14:54:56.074454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.124 [2024-12-09 14:54:56.074495] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:18.124 [2024-12-09 14:54:56.074506] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:22.325 [2024-12-09 14:54:59.770733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.770897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:22.325 [2024-12-09 14:54:59.770921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3696.220 ms 00:22:22.325 [2024-12-09 14:54:59.770935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.808645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.808724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.325 [2024-12-09 14:54:59.808740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.442 ms 00:22:22.325 [2024-12-09 14:54:59.808752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.808928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.808947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:22.325 [2024-12-09 14:54:59.808958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:22.325 [2024-12-09 14:54:59.808978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.845875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.845916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.325 [2024-12-09 14:54:59.845927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.843 ms 00:22:22.325 [2024-12-09 14:54:59.845937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.845964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.845978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.325 [2024-12-09 14:54:59.845987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:22.325 [2024-12-09 14:54:59.846003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.846448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.846469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.325 [2024-12-09 14:54:59.846480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:22:22.325 [2024-12-09 14:54:59.846491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 
[2024-12-09 14:54:59.846596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.846608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.325 [2024-12-09 14:54:59.846619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:22.325 [2024-12-09 14:54:59.846631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.862522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.862558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.325 [2024-12-09 14:54:59.862568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.874 ms 00:22:22.325 [2024-12-09 14:54:59.862578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.890830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:22.325 [2024-12-09 14:54:59.894595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.894632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:22.325 [2024-12-09 14:54:59.894650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.945 ms 00:22:22.325 [2024-12-09 14:54:59.894660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.973944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.973989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:22.325 [2024-12-09 14:54:59.974005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.242 ms 00:22:22.325 [2024-12-09 14:54:59.974013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.974208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.974223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:22.325 [2024-12-09 14:54:59.974237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:22:22.325 [2024-12-09 14:54:59.974245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:54:59.998143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:54:59.998176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:22.325 [2024-12-09 14:54:59.998190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.850 ms 00:22:22.325 [2024-12-09 14:54:59.998199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.325 [2024-12-09 14:55:00.020841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.325 [2024-12-09 14:55:00.021043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:22.325 [2024-12-09 14:55:00.021067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.601 ms 00:22:22.325 [2024-12-09 14:55:00.021075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.021658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.021676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:22.326 
[2024-12-09 14:55:00.021687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:22:22.326 [2024-12-09 14:55:00.021698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.096051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.096241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:22.326 [2024-12-09 14:55:00.096270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.314 ms 00:22:22.326 [2024-12-09 14:55:00.096280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.121778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.121823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:22.326 [2024-12-09 14:55:00.121838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.434 ms 00:22:22.326 [2024-12-09 14:55:00.121846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.145870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.145903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:22.326 [2024-12-09 14:55:00.145917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.998 ms 00:22:22.326 [2024-12-09 14:55:00.145925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.169659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.169695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:22.326 [2024-12-09 14:55:00.169709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.709 ms 00:22:22.326 [2024-12-09 14:55:00.169717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.169745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.169754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:22.326 [2024-12-09 14:55:00.169768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:22.326 [2024-12-09 14:55:00.169776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.169870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.326 [2024-12-09 14:55:00.169884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:22.326 [2024-12-09 14:55:00.169894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:22.326 [2024-12-09 14:55:00.169902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.326 [2024-12-09 14:55:00.170884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4107.295 ms, result 0 00:22:22.326 { 00:22:22.326 "name": "ftl0", 00:22:22.326 "uuid": "9d275a0e-c7d7-4199-8bd3-cc8b877c7a19" 00:22:22.326 } 00:22:22.326 14:55:00 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:22.326 14:55:00 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:22.326 14:55:00 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:22.326 14:55:00 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:22.587 [2024-12-09 14:55:00.594397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.594633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:22.587 [2024-12-09 14:55:00.594658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:22.587 [2024-12-09 14:55:00.594671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.594704] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.587 [2024-12-09 14:55:00.597964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.598010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:22.587 [2024-12-09 14:55:00.598026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:22:22.587 [2024-12-09 14:55:00.598035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.598338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.598355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:22.587 [2024-12-09 14:55:00.598369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:22:22.587 [2024-12-09 14:55:00.598376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.601662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.601849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:22.587 [2024-12-09 14:55:00.601871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:22:22.587 [2024-12-09 14:55:00.601881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.608178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.608219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:22.587 [2024-12-09 14:55:00.608237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.265 ms 00:22:22.587 [2024-12-09 14:55:00.608247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.633853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.633902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:22.587 [2024-12-09 14:55:00.633919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:22:22.587 [2024-12-09 14:55:00.633927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.652423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.652627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:22.587 [2024-12-09 14:55:00.652656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:22:22.587 [2024-12-09 14:55:00.652666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.652915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.652929] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:22.587 [2024-12-09 14:55:00.652943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:22:22.587 [2024-12-09 14:55:00.652952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.679806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.679853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:22.587 [2024-12-09 14:55:00.679868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.815 ms 00:22:22.587 [2024-12-09 14:55:00.679875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.587 [2024-12-09 14:55:00.705710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.587 [2024-12-09 14:55:00.705771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:22.587 [2024-12-09 14:55:00.705787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.780 ms 00:22:22.587 [2024-12-09 14:55:00.705795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.848 [2024-12-09 14:55:00.731176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.848 [2024-12-09 14:55:00.731374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:22.848 [2024-12-09 14:55:00.731400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.305 ms 00:22:22.848 [2024-12-09 14:55:00.731409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.848 [2024-12-09 14:55:00.756425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.848 [2024-12-09 14:55:00.756470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:22.848 [2024-12-09 14:55:00.756485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.843 ms 00:22:22.848 [2024-12-09 14:55:00.756493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.848 [2024-12-09 14:55:00.756544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:22.848 [2024-12-09 14:55:00.756562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756663] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:22.848 [2024-12-09 14:55:00.756671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 
[2024-12-09 14:55:00.756928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.756990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:22.849 [2024-12-09 14:55:00.757187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:22.849 [2024-12-09 14:55:00.757579] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:22.849 [2024-12-09 14:55:00.757590] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:22:22.850 [2024-12-09 14:55:00.757598] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:22.850 [2024-12-09 14:55:00.757611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:22.850 [2024-12-09 14:55:00.757621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:22.850 [2024-12-09 14:55:00.757631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:22.850 [2024-12-09 14:55:00.757639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:22.850 [2024-12-09 14:55:00.757649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:22.850 [2024-12-09 14:55:00.757657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:22.850 [2024-12-09 14:55:00.757666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:22.850 [2024-12-09 14:55:00.757672] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:22.850 [2024-12-09 14:55:00.757683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.850 [2024-12-09 14:55:00.757692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:22.850 [2024-12-09 14:55:00.757703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:22:22.850 [2024-12-09 14:55:00.757714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.773360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.850 [2024-12-09 14:55:00.773404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:22.850 [2024-12-09 14:55:00.773419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.598 ms 00:22:22.850 [2024-12-09 14:55:00.773427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.773898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.850 [2024-12-09 14:55:00.773914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:22.850 [2024-12-09 14:55:00.773929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:22:22.850 [2024-12-09 14:55:00.773938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.824238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.850 [2024-12-09 14:55:00.824290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.850 [2024-12-09 14:55:00.824306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.850 [2024-12-09 14:55:00.824315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.824393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.850 [2024-12-09 14:55:00.824402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.850 [2024-12-09 14:55:00.824417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.850 [2024-12-09 14:55:00.824425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.824515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.850 [2024-12-09 14:55:00.824529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.850 [2024-12-09 14:55:00.824540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.850 [2024-12-09 14:55:00.824549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.824573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.850 [2024-12-09 14:55:00.824582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.850 [2024-12-09 14:55:00.824593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.850 [2024-12-09 14:55:00.824603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.850 [2024-12-09 14:55:00.916581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.850 [2024-12-09 14:55:00.916911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.850 [2024-12-09 14:55:00.916942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
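[editor's note] The teardown being traced here mirrors the startup sequence: restore.sh assembled ftl_construct_args, created the FTL bdev on the thin-provisioned lvol with the nvc0n1p0 split as write-buffer cache, and now unloads it, which persists the L2P and metadata and marks the device clean for the restore pass. The two RPC calls from this run (the -t 240 timeout accommodates the multi-second NV cache scrub logged above):

    # Create the FTL bdev: base device is the lvol, cache is the 5171 MiB split
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 057d9cc6-df31-4831-82e9-693c4b526c9a --l2p_dram_limit 10 -c nvc0n1p0
    # Unload it again; this drives the 'FTL shutdown' management process traced here
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

Incidentally, the harmless '[: : integer expression expected' complaint from restore.sh line 54 earlier in this run comes from evaluating '[' '' -eq 1 ']' with an unset flag; a guard such as [ "${flag:-0}" -eq 1 ] (variable name hypothetical) would make that test well-formed without changing behavior.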
00:22:22.850 [2024-12-09 14:55:00.916952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.992358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.992619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.110 [2024-12-09 14:55:00.992644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.992658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.992833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.992846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.110 [2024-12-09 14:55:00.992859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.992868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.992929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.992941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.110 [2024-12-09 14:55:00.992954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.992963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.993095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.993108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.110 [2024-12-09 14:55:00.993120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.993130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.993172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.993183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.110 [2024-12-09 14:55:00.993194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.993203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.993262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.993274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.110 [2024-12-09 14:55:00.993285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.993294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.993360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.110 [2024-12-09 14:55:00.993373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.110 [2024-12-09 14:55:00.993385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.110 [2024-12-09 14:55:00.993393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.110 [2024-12-09 14:55:00.993580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 399.123 ms, result 0 00:22:23.110 true 00:22:23.110 14:55:01 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78711 
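[editor's note] killprocess 78711 then stops the SPDK target before the data phase; its body expands in the xtrace that follows. A sketch of the helper's shape reconstructed from that trace, not the verbatim function (the real one in common/autotest_common.sh also handles the sudo-wrapped case, elided here):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1         # '[' -z 78711 ']'
        kill -0 "$pid" || return 1        # is the process still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then
            : # would signal the sudo child instead (elided in this sketch)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

With the target gone, the test generates 1 GiB of random data (dd ... bs=4K count=256K, 250 MB/s here), checksums it with md5sum, and replays it onto ftl0 via spdk_dd using the saved ftl.json config, as the following log shows.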
00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78711 ']' 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78711 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78711 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78711' 00:22:23.110 killing process with pid 78711 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78711 00:22:23.110 14:55:01 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78711 00:22:29.691 14:55:07 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:33.891 262144+0 records in 00:22:33.891 262144+0 records out 00:22:33.891 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28889 s, 250 MB/s 00:22:33.891 14:55:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:35.274 14:55:13 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:35.534 [2024-12-09 14:55:13.407245] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:22:35.535 [2024-12-09 14:55:13.407332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78956 ] 00:22:35.535 [2024-12-09 14:55:13.557666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.796 [2024-12-09 14:55:13.666196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.059 [2024-12-09 14:55:13.962242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:36.059 [2024-12-09 14:55:13.962337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:36.059 [2024-12-09 14:55:14.124164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.124419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:36.059 [2024-12-09 14:55:14.124445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:36.059 [2024-12-09 14:55:14.124455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.124529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.124544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:36.059 [2024-12-09 14:55:14.124553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:36.059 [2024-12-09 14:55:14.124561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.124583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:22:36.059 [2024-12-09 14:55:14.125362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:36.059 [2024-12-09 14:55:14.125391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.125399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:36.059 [2024-12-09 14:55:14.125409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:22:36.059 [2024-12-09 14:55:14.125417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.127221] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:36.059 [2024-12-09 14:55:14.141675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.141732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:36.059 [2024-12-09 14:55:14.141747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.457 ms 00:22:36.059 [2024-12-09 14:55:14.141755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.141867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.141880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:36.059 [2024-12-09 14:55:14.141890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:36.059 [2024-12-09 14:55:14.141898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.150324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.150373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:36.059 [2024-12-09 14:55:14.150384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.320 ms 00:22:36.059 [2024-12-09 14:55:14.150398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.150482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.150491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:36.059 [2024-12-09 14:55:14.150500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:36.059 [2024-12-09 14:55:14.150508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.150556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.150566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:36.059 [2024-12-09 14:55:14.150575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:36.059 [2024-12-09 14:55:14.150582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.150609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:36.059 [2024-12-09 14:55:14.154704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.154746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:36.059 [2024-12-09 14:55:14.154760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.101 ms 00:22:36.059 [2024-12-09 14:55:14.154768] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.154827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.154837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:36.059 [2024-12-09 14:55:14.154847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:36.059 [2024-12-09 14:55:14.154855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.154934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:36.059 [2024-12-09 14:55:14.154961] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:36.059 [2024-12-09 14:55:14.154999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:36.059 [2024-12-09 14:55:14.155018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:36.059 [2024-12-09 14:55:14.155126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:36.059 [2024-12-09 14:55:14.155138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:36.059 [2024-12-09 14:55:14.155150] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:36.059 [2024-12-09 14:55:14.155162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:36.059 [2024-12-09 14:55:14.155171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:36.059 [2024-12-09 14:55:14.155179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:36.059 [2024-12-09 14:55:14.155188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:36.059 [2024-12-09 14:55:14.155198] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:36.059 [2024-12-09 14:55:14.155206] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:36.059 [2024-12-09 14:55:14.155215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.155223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:36.059 [2024-12-09 14:55:14.155231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:36.059 [2024-12-09 14:55:14.155239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.155327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.059 [2024-12-09 14:55:14.155335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:36.059 [2024-12-09 14:55:14.155344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:36.059 [2024-12-09 14:55:14.155351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.059 [2024-12-09 14:55:14.155457] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:36.059 [2024-12-09 14:55:14.155467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:36.059 [2024-12-09 14:55:14.155476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:36.059 [2024-12-09 14:55:14.155484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:36.060 [2024-12-09 14:55:14.155499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:36.060 [2024-12-09 14:55:14.155523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:36.060 [2024-12-09 14:55:14.155537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:36.060 [2024-12-09 14:55:14.155544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:36.060 [2024-12-09 14:55:14.155551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:36.060 [2024-12-09 14:55:14.155563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:36.060 [2024-12-09 14:55:14.155570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:36.060 [2024-12-09 14:55:14.155577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:36.060 [2024-12-09 14:55:14.155591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:36.060 [2024-12-09 14:55:14.155612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:36.060 [2024-12-09 14:55:14.155631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:36.060 [2024-12-09 14:55:14.155650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:36.060 [2024-12-09 14:55:14.155670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:36.060 [2024-12-09 14:55:14.155690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:36.060 [2024-12-09 14:55:14.155702] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:36.060 [2024-12-09 14:55:14.155709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:36.060 [2024-12-09 14:55:14.155715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:36.060 [2024-12-09 14:55:14.155723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:36.060 [2024-12-09 14:55:14.155730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:36.060 [2024-12-09 14:55:14.155737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:36.060 [2024-12-09 14:55:14.155753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:36.060 [2024-12-09 14:55:14.155760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155766] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:36.060 [2024-12-09 14:55:14.155774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:36.060 [2024-12-09 14:55:14.155781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.060 [2024-12-09 14:55:14.155797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:36.060 [2024-12-09 14:55:14.155820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:36.060 [2024-12-09 14:55:14.155827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:36.060 [2024-12-09 14:55:14.155835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:36.060 [2024-12-09 14:55:14.155842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:36.060 [2024-12-09 14:55:14.155849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:36.060 [2024-12-09 14:55:14.155858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:36.060 [2024-12-09 14:55:14.155867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.155879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:36.060 [2024-12-09 14:55:14.155886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:36.060 [2024-12-09 14:55:14.155893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:36.060 [2024-12-09 14:55:14.155901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:36.060 [2024-12-09 14:55:14.155907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:36.060 [2024-12-09 14:55:14.155914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:36.060 [2024-12-09 14:55:14.155922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:36.060 [2024-12-09 14:55:14.155928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:36.060 [2024-12-09 14:55:14.155936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:36.060 [2024-12-09 14:55:14.155943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.155950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.155957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.155965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.155979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:36.060 [2024-12-09 14:55:14.155987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:36.060 [2024-12-09 14:55:14.155995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.156003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:36.060 [2024-12-09 14:55:14.156011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:36.060 [2024-12-09 14:55:14.156019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:36.060 [2024-12-09 14:55:14.156026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:36.060 [2024-12-09 14:55:14.156033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.060 [2024-12-09 14:55:14.156041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:36.060 [2024-12-09 14:55:14.156049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:22:36.060 [2024-12-09 14:55:14.156057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-09 14:55:14.188980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-09 14:55:14.189197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.322 [2024-12-09 14:55:14.189218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.874 ms 00:22:36.322 [2024-12-09 14:55:14.189235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-09 14:55:14.189333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-09 14:55:14.189342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:36.322 [2024-12-09 14:55:14.189351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.069 ms 00:22:36.322 [2024-12-09 14:55:14.189359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-09 14:55:14.237827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-09 14:55:14.237885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:36.322 [2024-12-09 14:55:14.237899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.402 ms 00:22:36.322 [2024-12-09 14:55:14.237908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-09 14:55:14.237963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-09 14:55:14.237974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:36.322 [2024-12-09 14:55:14.237987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:36.322 [2024-12-09 14:55:14.237995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-09 14:55:14.238589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-09 14:55:14.238614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:36.322 [2024-12-09 14:55:14.238625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:22:36.322 [2024-12-09 14:55:14.238634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.238839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.238852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:36.323 [2024-12-09 14:55:14.238868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:22:36.323 [2024-12-09 14:55:14.238902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.255217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.255269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:36.323 [2024-12-09 14:55:14.255281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.290 ms 00:22:36.323 [2024-12-09 14:55:14.255290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.270152] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:36.323 [2024-12-09 14:55:14.270365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:36.323 [2024-12-09 14:55:14.270388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.270397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:36.323 [2024-12-09 14:55:14.270408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.981 ms 00:22:36.323 [2024-12-09 14:55:14.270416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.296946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.297156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:36.323 [2024-12-09 14:55:14.297179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.451 ms 00:22:36.323 [2024-12-09 14:55:14.297187] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.311039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.311091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:36.323 [2024-12-09 14:55:14.311104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.722 ms 00:22:36.323 [2024-12-09 14:55:14.311113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.323854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.323896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:36.323 [2024-12-09 14:55:14.323909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.687 ms 00:22:36.323 [2024-12-09 14:55:14.323918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.324609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.324634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:36.323 [2024-12-09 14:55:14.324646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:22:36.323 [2024-12-09 14:55:14.324657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.390553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.390625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:36.323 [2024-12-09 14:55:14.390648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.871 ms 00:22:36.323 [2024-12-09 14:55:14.390668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.403098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:36.323 [2024-12-09 14:55:14.406513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.406555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:36.323 [2024-12-09 14:55:14.406567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.780 ms 00:22:36.323 [2024-12-09 14:55:14.406576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.406675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.406686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:36.323 [2024-12-09 14:55:14.406696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:36.323 [2024-12-09 14:55:14.406706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.406783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.406794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:36.323 [2024-12-09 14:55:14.406835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:36.323 [2024-12-09 14:55:14.406844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.406867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.406901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:36.323 [2024-12-09 14:55:14.406911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:36.323 [2024-12-09 14:55:14.406919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.406959] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:36.323 [2024-12-09 14:55:14.406972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.406981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:36.323 [2024-12-09 14:55:14.406989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:36.323 [2024-12-09 14:55:14.406997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.432996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.433042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:36.323 [2024-12-09 14:55:14.433056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.979 ms 00:22:36.323 [2024-12-09 14:55:14.433071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.433160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.323 [2024-12-09 14:55:14.433171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:36.323 [2024-12-09 14:55:14.433181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:36.323 [2024-12-09 14:55:14.433189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-09 14:55:14.434483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.826 ms, result 0 00:22:37.707  [2024-12-09T14:55:16.762Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-09T14:55:17.701Z] Copying: 54/1024 [MB] (35 MBps) [2024-12-09T14:55:18.644Z] Copying: 96/1024 [MB] (42 MBps) [2024-12-09T14:55:19.587Z] Copying: 113/1024 [MB] (17 MBps) [2024-12-09T14:55:20.532Z] Copying: 129/1024 [MB] (15 MBps) [2024-12-09T14:55:21.477Z] Copying: 147/1024 [MB] (18 MBps) [2024-12-09T14:55:22.864Z] Copying: 165/1024 [MB] (17 MBps) [2024-12-09T14:55:23.807Z] Copying: 189/1024 [MB] (23 MBps) [2024-12-09T14:55:24.753Z] Copying: 204/1024 [MB] (15 MBps) [2024-12-09T14:55:25.696Z] Copying: 215/1024 [MB] (10 MBps) [2024-12-09T14:55:26.675Z] Copying: 230/1024 [MB] (15 MBps) [2024-12-09T14:55:27.645Z] Copying: 248/1024 [MB] (17 MBps) [2024-12-09T14:55:28.587Z] Copying: 268/1024 [MB] (20 MBps) [2024-12-09T14:55:29.526Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-09T14:55:30.460Z] Copying: 310/1024 [MB] (30 MBps) [2024-12-09T14:55:31.850Z] Copying: 341/1024 [MB] (30 MBps) [2024-12-09T14:55:32.786Z] Copying: 388/1024 [MB] (47 MBps) [2024-12-09T14:55:33.725Z] Copying: 435/1024 [MB] (46 MBps) [2024-12-09T14:55:34.672Z] Copying: 473/1024 [MB] (38 MBps) [2024-12-09T14:55:35.620Z] Copying: 491/1024 [MB] (17 MBps) [2024-12-09T14:55:36.564Z] Copying: 512/1024 [MB] (20 MBps) [2024-12-09T14:55:37.509Z] Copying: 531/1024 [MB] (19 MBps) [2024-12-09T14:55:38.454Z] Copying: 554/1024 [MB] (22 MBps) [2024-12-09T14:55:39.522Z] Copying: 572/1024 [MB] (17 MBps) [2024-12-09T14:55:40.472Z] Copying: 596208/1048576 [kB] (10200 kBps) [2024-12-09T14:55:41.852Z] Copying: 592/1024 [MB] (10 MBps) [2024-12-09T14:55:42.793Z] Copying: 614/1024 
[MB] (21 MBps) [2024-12-09T14:55:43.738Z] Copying: 632/1024 [MB] (18 MBps) [2024-12-09T14:55:44.682Z] Copying: 652/1024 [MB] (20 MBps) [2024-12-09T14:55:45.623Z] Copying: 671/1024 [MB] (19 MBps) [2024-12-09T14:55:46.568Z] Copying: 684/1024 [MB] (12 MBps) [2024-12-09T14:55:47.514Z] Copying: 700/1024 [MB] (15 MBps) [2024-12-09T14:55:48.457Z] Copying: 719/1024 [MB] (18 MBps) [2024-12-09T14:55:49.837Z] Copying: 739/1024 [MB] (20 MBps) [2024-12-09T14:55:50.775Z] Copying: 768/1024 [MB] (29 MBps) [2024-12-09T14:55:51.717Z] Copying: 799/1024 [MB] (30 MBps) [2024-12-09T14:55:52.662Z] Copying: 819/1024 [MB] (19 MBps) [2024-12-09T14:55:53.608Z] Copying: 840/1024 [MB] (20 MBps) [2024-12-09T14:55:54.552Z] Copying: 859/1024 [MB] (19 MBps) [2024-12-09T14:55:55.494Z] Copying: 878/1024 [MB] (19 MBps) [2024-12-09T14:55:56.877Z] Copying: 897/1024 [MB] (18 MBps) [2024-12-09T14:55:57.812Z] Copying: 926/1024 [MB] (29 MBps) [2024-12-09T14:55:58.752Z] Copying: 973/1024 [MB] (46 MBps) [2024-12-09T14:55:59.328Z] Copying: 1009/1024 [MB] (36 MBps) [2024-12-09T14:55:59.328Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-09 14:55:59.071479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.071534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.206 [2024-12-09 14:55:59.071550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:21.206 [2024-12-09 14:55:59.071561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.071584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.206 [2024-12-09 14:55:59.074557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.074600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.206 [2024-12-09 14:55:59.074619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:23:21.206 [2024-12-09 14:55:59.074627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.077351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.077395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.206 [2024-12-09 14:55:59.077406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:23:21.206 [2024-12-09 14:55:59.077415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.095966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.096012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.206 [2024-12-09 14:55:59.096024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:23:21.206 [2024-12-09 14:55:59.096032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.102194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.102234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.206 [2024-12-09 14:55:59.102246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:23:21.206 [2024-12-09 14:55:59.102254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.128704] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.128748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.206 [2024-12-09 14:55:59.128760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.391 ms 00:23:21.206 [2024-12-09 14:55:59.128776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.145157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.145202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.206 [2024-12-09 14:55:59.145214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.323 ms 00:23:21.206 [2024-12-09 14:55:59.145222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.145389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.145405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.206 [2024-12-09 14:55:59.145416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:23:21.206 [2024-12-09 14:55:59.145423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.171303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.171346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:21.206 [2024-12-09 14:55:59.171357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.864 ms 00:23:21.206 [2024-12-09 14:55:59.171364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.196382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.196422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:21.206 [2024-12-09 14:55:59.196433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms 00:23:21.206 [2024-12-09 14:55:59.196439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.221093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.221137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.206 [2024-12-09 14:55:59.221148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.612 ms 00:23:21.206 [2024-12-09 14:55:59.221155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.245659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.206 [2024-12-09 14:55:59.245704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.206 [2024-12-09 14:55:59.245716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:23:21.206 [2024-12-09 14:55:59.245723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.206 [2024-12-09 14:55:59.245766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.206 [2024-12-09 14:55:59.245783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 
14:55:59.245822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.245997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:23:21.206 [2024-12-09 14:55:59.246012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:21.206 [2024-12-09 14:55:59.246145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:21.207 [2024-12-09 14:55:59.246566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:21.207 [2024-12-09 14:55:59.246577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:23:21.207 [2024-12-09 14:55:59.246585] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:21.207 [2024-12-09 14:55:59.246592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:21.207 [2024-12-09 14:55:59.246599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:21.207 [2024-12-09 14:55:59.246607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:21.207 [2024-12-09 14:55:59.246614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.207 [2024-12-09 14:55:59.246627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:21.207 [2024-12-09 14:55:59.246635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.207 [2024-12-09 14:55:59.246641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.207 [2024-12-09 14:55:59.246647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.207 [2024-12-09 14:55:59.246655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.207 [2024-12-09 14:55:59.246662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.207 [2024-12-09 14:55:59.246671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:23:21.207 [2024-12-09 14:55:59.246678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.260024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.207 [2024-12-09 14:55:59.260062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.207 [2024-12-09 14:55:59.260073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.303 ms 00:23:21.207 [2024-12-09 14:55:59.260081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.260481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.207 [2024-12-09 14:55:59.260503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.207 [2024-12-09 14:55:59.260512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:23:21.207 [2024-12-09 14:55:59.260528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.296793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.207 [2024-12-09 14:55:59.296847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.207 [2024-12-09 14:55:59.296858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.207 [2024-12-09 14:55:59.296867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.296931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.207 [2024-12-09 14:55:59.296940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.207 [2024-12-09 14:55:59.296948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.207 [2024-12-09 14:55:59.296962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.297023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.207 [2024-12-09 14:55:59.297033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.207 
[2024-12-09 14:55:59.297041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.207 [2024-12-09 14:55:59.297050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.207 [2024-12-09 14:55:59.297066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.207 [2024-12-09 14:55:59.297074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.207 [2024-12-09 14:55:59.297082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.207 [2024-12-09 14:55:59.297090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.381835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.381890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.467 [2024-12-09 14:55:59.381905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.381913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.451422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.467 [2024-12-09 14:55:59.451434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.451449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.451542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.467 [2024-12-09 14:55:59.451551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.451560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.451610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.467 [2024-12-09 14:55:59.451620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.451629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.451744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.467 [2024-12-09 14:55:59.451752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.451761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.467 [2024-12-09 14:55:59.451842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:21.467 [2024-12-09 14:55:59.451852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.467 [2024-12-09 14:55:59.451859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.467 [2024-12-09 14:55:59.451903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.468 [2024-12-09 14:55:59.451917] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.468 [2024-12-09 14:55:59.451925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.468 [2024-12-09 14:55:59.451933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.468 [2024-12-09 14:55:59.451979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.468 [2024-12-09 14:55:59.451989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.468 [2024-12-09 14:55:59.451998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.468 [2024-12-09 14:55:59.452006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.468 [2024-12-09 14:55:59.452142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.623 ms, result 0 00:23:22.037 00:23:22.037 00:23:22.037 14:56:00 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:22.037 [2024-12-09 14:56:00.122595] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:23:22.037 [2024-12-09 14:56:00.122721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:23:22.298 [2024-12-09 14:56:00.281694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.298 [2024-12-09 14:56:00.397743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.873 [2024-12-09 14:56:00.694028] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.873 [2024-12-09 14:56:00.694120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.873 [2024-12-09 14:56:00.855060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.855127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.873 [2024-12-09 14:56:00.855142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:22.873 [2024-12-09 14:56:00.855151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.855208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.855222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.873 [2024-12-09 14:56:00.855231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:22.873 [2024-12-09 14:56:00.855239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.855261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.873 [2024-12-09 14:56:00.855966] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.873 [2024-12-09 14:56:00.855995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.856003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.873 [2024-12-09 14:56:00.856013] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:23:22.873 [2024-12-09 14:56:00.856021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.857694] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:22.873 [2024-12-09 14:56:00.871753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.871812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:22.873 [2024-12-09 14:56:00.871827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:23:22.873 [2024-12-09 14:56:00.871835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.871919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.871929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:22.873 [2024-12-09 14:56:00.871938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:22.873 [2024-12-09 14:56:00.871947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.879989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.880033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.873 [2024-12-09 14:56:00.880044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.964 ms 00:23:22.873 [2024-12-09 14:56:00.880058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.880138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.880147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.873 [2024-12-09 14:56:00.880156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:22.873 [2024-12-09 14:56:00.880165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.880207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.880218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.873 [2024-12-09 14:56:00.880227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.873 [2024-12-09 14:56:00.880234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.880260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.873 [2024-12-09 14:56:00.884319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.884360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.873 [2024-12-09 14:56:00.884374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.063 ms 00:23:22.873 [2024-12-09 14:56:00.884382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.884421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.884430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.873 [2024-12-09 14:56:00.884439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:22.873 [2024-12-09 14:56:00.884447] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.884499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:22.873 [2024-12-09 14:56:00.884524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:22.873 [2024-12-09 14:56:00.884561] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:22.873 [2024-12-09 14:56:00.884582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:22.873 [2024-12-09 14:56:00.884689] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.873 [2024-12-09 14:56:00.884700] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.873 [2024-12-09 14:56:00.884711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:22.873 [2024-12-09 14:56:00.884721] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.873 [2024-12-09 14:56:00.884730] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.873 [2024-12-09 14:56:00.884738] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:22.873 [2024-12-09 14:56:00.884746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.873 [2024-12-09 14:56:00.884757] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.873 [2024-12-09 14:56:00.884765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.873 [2024-12-09 14:56:00.884773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.884781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.873 [2024-12-09 14:56:00.884789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:22.873 [2024-12-09 14:56:00.884796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.884898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.873 [2024-12-09 14:56:00.884906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.873 [2024-12-09 14:56:00.884914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:22.873 [2024-12-09 14:56:00.884921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.873 [2024-12-09 14:56:00.885032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.873 [2024-12-09 14:56:00.885042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.873 [2024-12-09 14:56:00.885051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.873 [2024-12-09 14:56:00.885060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.873 [2024-12-09 14:56:00.885075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:22.873 
[2024-12-09 14:56:00.885091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.873 [2024-12-09 14:56:00.885099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.873 [2024-12-09 14:56:00.885113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.873 [2024-12-09 14:56:00.885120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:22.873 [2024-12-09 14:56:00.885126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.873 [2024-12-09 14:56:00.885141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.873 [2024-12-09 14:56:00.885148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:22.873 [2024-12-09 14:56:00.885155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.873 [2024-12-09 14:56:00.885169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:22.873 [2024-12-09 14:56:00.885176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.873 [2024-12-09 14:56:00.885190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:22.873 [2024-12-09 14:56:00.885196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.873 [2024-12-09 14:56:00.885203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.873 [2024-12-09 14:56:00.885210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.874 [2024-12-09 14:56:00.885224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.874 [2024-12-09 14:56:00.885231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.874 [2024-12-09 14:56:00.885244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.874 [2024-12-09 14:56:00.885251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.874 [2024-12-09 14:56:00.885266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.874 [2024-12-09 14:56:00.885272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.874 [2024-12-09 14:56:00.885286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.874 [2024-12-09 14:56:00.885292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:22.874 [2024-12-09 14:56:00.885299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.874 [2024-12-09 14:56:00.885305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.874 [2024-12-09 14:56:00.885314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:22.874 [2024-12-09 14:56:00.885321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.874 [2024-12-09 14:56:00.885336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:22.874 [2024-12-09 14:56:00.885343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885350] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.874 [2024-12-09 14:56:00.885358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.874 [2024-12-09 14:56:00.885366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.874 [2024-12-09 14:56:00.885373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.874 [2024-12-09 14:56:00.885390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.874 [2024-12-09 14:56:00.885397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.874 [2024-12-09 14:56:00.885405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.874 [2024-12-09 14:56:00.885412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.874 [2024-12-09 14:56:00.885419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.874 [2024-12-09 14:56:00.885425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.874 [2024-12-09 14:56:00.885434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.874 [2024-12-09 14:56:00.885443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:22.874 [2024-12-09 14:56:00.885462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:22.874 [2024-12-09 14:56:00.885470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:22.874 [2024-12-09 14:56:00.885478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:22.874 [2024-12-09 14:56:00.885486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:22.874 [2024-12-09 14:56:00.885493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:22.874 [2024-12-09 14:56:00.885500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:22.874 [2024-12-09 14:56:00.885507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:22.874 [2024-12-09 14:56:00.885514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:22.874 [2024-12-09 14:56:00.885521] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:22.874 [2024-12-09 14:56:00.885558] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.874 [2024-12-09 14:56:00.885572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.874 [2024-12-09 14:56:00.885589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.874 [2024-12-09 14:56:00.885597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.874 [2024-12-09 14:56:00.885604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.874 [2024-12-09 14:56:00.885612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.885620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.874 [2024-12-09 14:56:00.885627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:23:22.874 [2024-12-09 14:56:00.885634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.917429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.917482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.874 [2024-12-09 14:56:00.917494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.748 ms 00:23:22.874 [2024-12-09 14:56:00.917505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.917596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.917605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.874 [2024-12-09 14:56:00.917613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:22.874 [2024-12-09 14:56:00.917622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.963967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.964020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.874 [2024-12-09 14:56:00.964033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.284 ms 
00:23:22.874 [2024-12-09 14:56:00.964042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.964092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.964103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.874 [2024-12-09 14:56:00.964116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:22.874 [2024-12-09 14:56:00.964124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.964731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.964772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.874 [2024-12-09 14:56:00.964784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:23:22.874 [2024-12-09 14:56:00.964792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.964957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.964968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.874 [2024-12-09 14:56:00.964984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:23:22.874 [2024-12-09 14:56:00.964992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.874 [2024-12-09 14:56:00.980622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.874 [2024-12-09 14:56:00.980672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.874 [2024-12-09 14:56:00.980683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.611 ms 00:23:22.874 [2024-12-09 14:56:00.980691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:00.995173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:23.136 [2024-12-09 14:56:00.995222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:23.136 [2024-12-09 14:56:00.995236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:00.995245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:23.136 [2024-12-09 14:56:00.995254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.436 ms 00:23:23.136 [2024-12-09 14:56:00.995261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.021463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.021518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:23.136 [2024-12-09 14:56:01.021531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.147 ms 00:23:23.136 [2024-12-09 14:56:01.021538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.034501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.034549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:23.136 [2024-12-09 14:56:01.034561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:23:23.136 [2024-12-09 14:56:01.034569] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.047365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.047411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:23.136 [2024-12-09 14:56:01.047423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.749 ms 00:23:23.136 [2024-12-09 14:56:01.047430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.048087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.048113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:23.136 [2024-12-09 14:56:01.048126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:23:23.136 [2024-12-09 14:56:01.048134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.114355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.114422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:23.136 [2024-12-09 14:56:01.114445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.199 ms 00:23:23.136 [2024-12-09 14:56:01.114454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.125915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:23.136 [2024-12-09 14:56:01.129173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.129216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:23.136 [2024-12-09 14:56:01.129228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:23:23.136 [2024-12-09 14:56:01.129236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.129324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.129341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:23.136 [2024-12-09 14:56:01.129356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:23.136 [2024-12-09 14:56:01.129365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.129436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.129447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:23.136 [2024-12-09 14:56:01.129456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:23.136 [2024-12-09 14:56:01.129464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.129483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.136 [2024-12-09 14:56:01.129493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:23.136 [2024-12-09 14:56:01.129501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:23.136 [2024-12-09 14:56:01.129509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.136 [2024-12-09 14:56:01.129546] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:23.136 [2024-12-09 14:56:01.129557] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action
00:23:23.136 [2024-12-09 14:56:01.129565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:23:23.136 [2024-12-09 14:56:01.129573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:23:23.136 [2024-12-09 14:56:01.129583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.136 [2024-12-09 14:56:01.155757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.136 [2024-12-09 14:56:01.155818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:23.136 [2024-12-09 14:56:01.155838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms
00:23:23.136 [2024-12-09 14:56:01.155847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.136 [2024-12-09 14:56:01.155932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:23.136 [2024-12-09 14:56:01.155942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:23.136 [2024-12-09 14:56:01.155951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:23:23.136 [2024-12-09 14:56:01.155960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:23.136 [2024-12-09 14:56:01.157341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.779 ms, result 0
00:23:24.525  [2024-12-09T14:56:03.591Z] Copying: 12/1024 [MB] (12 MBps)
[2024-12-09T14:56:04.534Z] Copying: 34/1024 [MB] (21 MBps)
[2024-12-09T14:56:05.474Z] Copying: 54/1024 [MB] (19 MBps)
[2024-12-09T14:56:06.418Z] Copying: 74/1024 [MB] (20 MBps)
[2024-12-09T14:56:07.362Z] Copying: 94/1024 [MB] (20 MBps)
[2024-12-09T14:56:08.746Z] Copying: 115/1024 [MB] (20 MBps)
[2024-12-09T14:56:09.688Z] Copying: 135/1024 [MB] (19 MBps)
[2024-12-09T14:56:10.631Z] Copying: 146/1024 [MB] (11 MBps)
[2024-12-09T14:56:11.577Z] Copying: 158/1024 [MB] (12 MBps)
[2024-12-09T14:56:12.520Z] Copying: 169/1024 [MB] (10 MBps)
[2024-12-09T14:56:13.466Z] Copying: 182/1024 [MB] (13 MBps)
[2024-12-09T14:56:14.411Z] Copying: 195/1024 [MB] (12 MBps)
[2024-12-09T14:56:15.352Z] Copying: 209/1024 [MB] (13 MBps)
[2024-12-09T14:56:16.730Z] Copying: 228/1024 [MB] (19 MBps)
[2024-12-09T14:56:17.678Z] Copying: 259/1024 [MB] (31 MBps)
[2024-12-09T14:56:18.621Z] Copying: 276/1024 [MB] (16 MBps)
[2024-12-09T14:56:19.561Z] Copying: 290/1024 [MB] (14 MBps)
[2024-12-09T14:56:20.505Z] Copying: 311/1024 [MB] (21 MBps)
[2024-12-09T14:56:21.447Z] Copying: 325/1024 [MB] (14 MBps)
[2024-12-09T14:56:22.392Z] Copying: 339/1024 [MB] (14 MBps)
[2024-12-09T14:56:23.781Z] Copying: 355/1024 [MB] (16 MBps)
[2024-12-09T14:56:24.356Z] Copying: 366/1024 [MB] (10 MBps)
[2024-12-09T14:56:25.749Z] Copying: 381/1024 [MB] (15 MBps)
[2024-12-09T14:56:26.691Z] Copying: 393/1024 [MB] (12 MBps)
[2024-12-09T14:56:27.702Z] Copying: 414/1024 [MB] (20 MBps)
[2024-12-09T14:56:28.708Z] Copying: 438/1024 [MB] (24 MBps)
[2024-12-09T14:56:29.650Z] Copying: 456/1024 [MB] (18 MBps)
[2024-12-09T14:56:30.592Z] Copying: 468/1024 [MB] (12 MBps)
[2024-12-09T14:56:31.536Z] Copying: 479/1024 [MB] (11 MBps)
[2024-12-09T14:56:32.481Z] Copying: 495/1024 [MB] (16 MBps)
[2024-12-09T14:56:33.427Z] Copying: 506/1024 [MB] (10 MBps)
[2024-12-09T14:56:34.373Z] Copying: 516/1024 [MB] (10 MBps)
[2024-12-09T14:56:35.756Z] Copying: 526/1024 [MB] (10 MBps)
[2024-12-09T14:56:36.692Z] Copying: 542/1024 [MB] (15 MBps)
[2024-12-09T14:56:37.635Z] Copying: 583/1024 [MB] (41 MBps)
[2024-12-09T14:56:38.579Z] Copying: 608/1024 [MB] (24 MBps)
[2024-12-09T14:56:39.524Z] Copying: 622/1024 [MB] (14 MBps)
[2024-12-09T14:56:40.468Z] Copying: 642/1024 [MB] (20 MBps)
[2024-12-09T14:56:41.414Z] Copying: 659/1024 [MB] (17 MBps)
[2024-12-09T14:56:42.355Z] Copying: 673/1024 [MB] (13 MBps)
[2024-12-09T14:56:43.742Z] Copying: 695/1024 [MB] (21 MBps)
[2024-12-09T14:56:44.685Z] Copying: 714/1024 [MB] (18 MBps)
[2024-12-09T14:56:45.629Z] Copying: 727/1024 [MB] (13 MBps)
[2024-12-09T14:56:46.574Z] Copying: 741/1024 [MB] (13 MBps)
[2024-12-09T14:56:47.514Z] Copying: 760/1024 [MB] (18 MBps)
[2024-12-09T14:56:48.460Z] Copying: 792/1024 [MB] (32 MBps)
[2024-12-09T14:56:49.406Z] Copying: 806/1024 [MB] (14 MBps)
[2024-12-09T14:56:50.353Z] Copying: 826/1024 [MB] (19 MBps)
[2024-12-09T14:56:51.741Z] Copying: 848/1024 [MB] (22 MBps)
[2024-12-09T14:56:52.686Z] Copying: 859/1024 [MB] (11 MBps)
[2024-12-09T14:56:53.631Z] Copying: 875/1024 [MB] (15 MBps)
[2024-12-09T14:56:54.575Z] Copying: 895/1024 [MB] (19 MBps)
[2024-12-09T14:56:55.526Z] Copying: 908/1024 [MB] (13 MBps)
[2024-12-09T14:56:56.556Z] Copying: 928/1024 [MB] (19 MBps)
[2024-12-09T14:56:57.502Z] Copying: 943/1024 [MB] (15 MBps)
[2024-12-09T14:56:58.448Z] Copying: 958/1024 [MB] (15 MBps)
[2024-12-09T14:56:59.394Z] Copying: 978/1024 [MB] (19 MBps)
[2024-12-09T14:57:00.782Z] Copying: 993/1024 [MB] (15 MBps)
[2024-12-09T14:57:01.353Z] Copying: 1009/1024 [MB] (15 MBps)
[2024-12-09T14:57:01.614Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-09 14:57:01.595442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:23.492 [2024-12-09 14:57:01.595552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:23.492 [2024-12-09 14:57:01.595574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:23.492 [2024-12-09 14:57:01.595587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:23.492 [2024-12-09 14:57:01.595620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:23.492 [2024-12-09 14:57:01.600466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:23.492 [2024-12-09 14:57:01.600523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:23.492 [2024-12-09 14:57:01.600534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms
00:24:23.492 [2024-12-09 14:57:01.600545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:23.492 [2024-12-09 14:57:01.600790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:23.492 [2024-12-09 14:57:01.600814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:23.492 [2024-12-09 14:57:01.600825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms
00:24:23.492 [2024-12-09 14:57:01.600833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:23.492 [2024-12-09 14:57:01.604551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:23.492 [2024-12-09 14:57:01.604586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:23.492 [2024-12-09 14:57:01.604597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms
00:24:23.492 [2024-12-09 14:57:01.604611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:23.492 [2024-12-09 14:57:01.610871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:24:23.492 [2024-12-09 14:57:01.610913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:23.492 [2024-12-09 14:57:01.610925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:24:23.492 [2024-12-09 14:57:01.610961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.638296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.638350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.755 [2024-12-09 14:57:01.638363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.260 ms 00:24:23.755 [2024-12-09 14:57:01.638371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.655432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.655483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.755 [2024-12-09 14:57:01.655497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.009 ms 00:24:23.755 [2024-12-09 14:57:01.655506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.655671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.655685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.755 [2024-12-09 14:57:01.655696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:23.755 [2024-12-09 14:57:01.655705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.681560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.681612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:23.755 [2024-12-09 14:57:01.681625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.838 ms 00:24:23.755 [2024-12-09 14:57:01.681632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.707520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.707571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:23.755 [2024-12-09 14:57:01.707583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.837 ms 00:24:23.755 [2024-12-09 14:57:01.707591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.732924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.732974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.755 [2024-12-09 14:57:01.732986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:24:23.755 [2024-12-09 14:57:01.732993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 14:57:01.758023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.755 [2024-12-09 14:57:01.758070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.755 [2024-12-09 14:57:01.758082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.939 ms 00:24:23.755 [2024-12-09 14:57:01.758089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.755 [2024-12-09 
14:57:01.758137] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.755 [2024-12-09 14:57:01.758161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 
14:57:01.758357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:24:23.755 [2024-12-09 14:57:01.758550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.755 [2024-12-09 14:57:01.758573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.756 [2024-12-09 14:57:01.758999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.756 [2024-12-09 14:57:01.759008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:24:23.756 [2024-12-09 14:57:01.759017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:23.756 [2024-12-09 14:57:01.759024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:23.756 [2024-12-09 14:57:01.759032] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:23.756 [2024-12-09 14:57:01.759041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:23.756 [2024-12-09 14:57:01.759056] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.756 [2024-12-09 14:57:01.759065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.756 [2024-12-09 14:57:01.759073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.756 [2024-12-09 14:57:01.759080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.756 [2024-12-09 14:57:01.759087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.756 [2024-12-09 14:57:01.759095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.756 [2024-12-09 14:57:01.759104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.756 [2024-12-09 14:57:01.759113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:24:23.756 [2024-12-09 14:57:01.759123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.772722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.756 [2024-12-09 14:57:01.772766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:23.756 [2024-12-09 14:57:01.772777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.578 ms 00:24:23.756 [2024-12-09 14:57:01.772785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.773220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.756 [2024-12-09 14:57:01.773245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:23.756 [2024-12-09 14:57:01.773263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:24:23.756 [2024-12-09 14:57:01.773271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.810092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.756 [2024-12-09 14:57:01.810144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.756 [2024-12-09 14:57:01.810156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.756 [2024-12-09 14:57:01.810166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.810233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.756 [2024-12-09 14:57:01.810243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.756 [2024-12-09 14:57:01.810257] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.756 [2024-12-09 14:57:01.810266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.810354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.756 [2024-12-09 14:57:01.810366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.756 [2024-12-09 14:57:01.810375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.756 [2024-12-09 14:57:01.810384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.756 [2024-12-09 14:57:01.810402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.756 [2024-12-09 14:57:01.810412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.756 [2024-12-09 14:57:01.810422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.756 [2024-12-09 14:57:01.810434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.897622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.897676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.018 [2024-12-09 14:57:01.897690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.018 [2024-12-09 14:57:01.897698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.967527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.967587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.018 [2024-12-09 14:57:01.967606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.018 [2024-12-09 14:57:01.967615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.967676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.967687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.018 [2024-12-09 14:57:01.967696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.018 [2024-12-09 14:57:01.967705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.967766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.967778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.018 [2024-12-09 14:57:01.967789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.018 [2024-12-09 14:57:01.967798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.967934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.967945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.018 [2024-12-09 14:57:01.967954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.018 [2024-12-09 14:57:01.967962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.018 [2024-12-09 14:57:01.967994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.018 [2024-12-09 14:57:01.968004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock
00:24:24.018 [2024-12-09 14:57:01.968012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:24.018 [2024-12-09 14:57:01.968020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:24.018 [2024-12-09 14:57:01.968065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:24.018 [2024-12-09 14:57:01.968076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:24.018 [2024-12-09 14:57:01.968084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:24.018 [2024-12-09 14:57:01.968093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:24.018 [2024-12-09 14:57:01.968141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:24.018 [2024-12-09 14:57:01.968159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:24.018 [2024-12-09 14:57:01.968168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:24.018 [2024-12-09 14:57:01.968176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:24.018 [2024-12-09 14:57:01.968314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.843 ms, result 0
00:24:24.960
00:24:24.960
00:24:24.960 14:57:02 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:26.873 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
14:57:04 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-12-09 14:57:04.820397] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
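For reference, the ftl.ftl_restore steps logged in this stretch form a simple round trip: dump the FTL bdev's contents to a file, check the file against a stored md5, then write the file back into the bdev at an offset. A minimal sketch of that flow, using only the spdk_dd flags and paths visible in this log (shortening /home/vagrant/spdk_repo/spdk to a cd is an assumption for readability; the 4 KiB block-size arithmetic is inferred from the 1024 MiB "Copying" total above):

  cd /home/vagrant/spdk_repo/spdk  # assumed working directory

  # restore.sh@74: read 262144 blocks (1024 MiB at an assumed 4 KiB per
  # block) from the FTL bdev "ftl0" into a plain file; --ib names the input
  # bdev, --of the output file, --json the saved bdev configuration.
  ./build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile \
      --json=test/ftl/config/ftl.json --count=262144

  # restore.sh@76: verify the dumped data against the checksum recorded
  # earlier in the test.
  md5sum -c test/ftl/testfile.md5

  # restore.sh@79: write the file back into the bdev; --ob names the output
  # bdev and --seek=131072 is the block offset within ftl0.
  ./build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
      --json=test/ftl/config/ftl.json --seek=131072

The spdk_dd run below is the startup of that third step.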
00:24:26.873 [2024-12-09 14:57:04.820523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80100 ] 00:24:26.873 [2024-12-09 14:57:04.981233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.135 [2024-12-09 14:57:05.103904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.397 [2024-12-09 14:57:05.402926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.397 [2024-12-09 14:57:05.403024] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.659 [2024-12-09 14:57:05.564496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.564565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.659 [2024-12-09 14:57:05.564581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:27.659 [2024-12-09 14:57:05.564589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.564647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.564660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.659 [2024-12-09 14:57:05.564670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:27.659 [2024-12-09 14:57:05.564678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.564699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.659 [2024-12-09 14:57:05.565543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.659 [2024-12-09 14:57:05.565589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.565598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.659 [2024-12-09 14:57:05.565608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:24:27.659 [2024-12-09 14:57:05.565616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.567409] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:27.659 [2024-12-09 14:57:05.581653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.581703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:27.659 [2024-12-09 14:57:05.581716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.245 ms 00:24:27.659 [2024-12-09 14:57:05.581724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.581824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.581835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:27.659 [2024-12-09 14:57:05.581846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:27.659 [2024-12-09 14:57:05.581854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.590209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:27.659 [2024-12-09 14:57:05.590255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.659 [2024-12-09 14:57:05.590267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.272 ms 00:24:27.659 [2024-12-09 14:57:05.590281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.590364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.590374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.659 [2024-12-09 14:57:05.590383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:27.659 [2024-12-09 14:57:05.590392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.590437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.590447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.659 [2024-12-09 14:57:05.590458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:27.659 [2024-12-09 14:57:05.590466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.590493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.659 [2024-12-09 14:57:05.594599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.594639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.659 [2024-12-09 14:57:05.594653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.112 ms 00:24:27.659 [2024-12-09 14:57:05.594661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.594702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.594711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.659 [2024-12-09 14:57:05.594720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:27.659 [2024-12-09 14:57:05.594728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.594782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:27.659 [2024-12-09 14:57:05.594822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:27.659 [2024-12-09 14:57:05.594860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:27.659 [2024-12-09 14:57:05.594881] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:27.659 [2024-12-09 14:57:05.595001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.659 [2024-12-09 14:57:05.595011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.659 [2024-12-09 14:57:05.595023] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:27.659 [2024-12-09 14:57:05.595034] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.659 [2024-12-09 14:57:05.595043] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.659 [2024-12-09 14:57:05.595052] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.659 [2024-12-09 14:57:05.595060] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:27.659 [2024-12-09 14:57:05.595070] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.659 [2024-12-09 14:57:05.595078] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.659 [2024-12-09 14:57:05.595085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.595093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.659 [2024-12-09 14:57:05.595101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:24:27.659 [2024-12-09 14:57:05.595108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.595193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.659 [2024-12-09 14:57:05.595202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.659 [2024-12-09 14:57:05.595209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:27.659 [2024-12-09 14:57:05.595217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.659 [2024-12-09 14:57:05.595327] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.660 [2024-12-09 14:57:05.595338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.660 [2024-12-09 14:57:05.595346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.660 [2024-12-09 14:57:05.595372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.660 [2024-12-09 14:57:05.595396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.660 [2024-12-09 14:57:05.595410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.660 [2024-12-09 14:57:05.595418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.660 [2024-12-09 14:57:05.595425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.660 [2024-12-09 14:57:05.595439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.660 [2024-12-09 14:57:05.595447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.660 [2024-12-09 14:57:05.595454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:27.660 [2024-12-09 14:57:05.595468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595476] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.660 [2024-12-09 14:57:05.595490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.660 [2024-12-09 14:57:05.595510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.660 [2024-12-09 14:57:05.595531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.660 [2024-12-09 14:57:05.595552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.660 [2024-12-09 14:57:05.595572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.660 [2024-12-09 14:57:05.595585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.660 [2024-12-09 14:57:05.595592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.660 [2024-12-09 14:57:05.595598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.660 [2024-12-09 14:57:05.595607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.660 [2024-12-09 14:57:05.595614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.660 [2024-12-09 14:57:05.595620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.660 [2024-12-09 14:57:05.595634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.660 [2024-12-09 14:57:05.595640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595646] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.660 [2024-12-09 14:57:05.595654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.660 [2024-12-09 14:57:05.595662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.660 [2024-12-09 14:57:05.595678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.660 [2024-12-09 14:57:05.595685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.660 [2024-12-09 14:57:05.595693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.660 
[2024-12-09 14:57:05.595700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.660 [2024-12-09 14:57:05.595706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.660 [2024-12-09 14:57:05.595713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:27.660 [2024-12-09 14:57:05.595721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.660 [2024-12-09 14:57:05.595730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.660 [2024-12-09 14:57:05.595750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.660 [2024-12-09 14:57:05.595757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.660 [2024-12-09 14:57:05.595765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.660 [2024-12-09 14:57:05.595772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.660 [2024-12-09 14:57:05.595780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.660 [2024-12-09 14:57:05.595788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.660 [2024-12-09 14:57:05.595795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.660 [2024-12-09 14:57:05.595816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.660 [2024-12-09 14:57:05.595823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.660 [2024-12-09 14:57:05.595858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.660 [2024-12-09 14:57:05.595867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.660 [2024-12-09 14:57:05.595882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.660 [2024-12-09 14:57:05.595890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.660 [2024-12-09 14:57:05.595897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.660 [2024-12-09 14:57:05.595904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.595912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.660 [2024-12-09 14:57:05.595921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:24:27.660 [2024-12-09 14:57:05.595928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.628153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.628204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.660 [2024-12-09 14:57:05.628215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.177 ms 00:24:27.660 [2024-12-09 14:57:05.628227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.628322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.628332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:27.660 [2024-12-09 14:57:05.628340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:27.660 [2024-12-09 14:57:05.628348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.671950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.672007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.660 [2024-12-09 14:57:05.672020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.537 ms 00:24:27.660 [2024-12-09 14:57:05.672029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.672079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.672090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.660 [2024-12-09 14:57:05.672104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:27.660 [2024-12-09 14:57:05.672113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.672724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.672764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.660 [2024-12-09 14:57:05.672775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:24:27.660 [2024-12-09 14:57:05.672783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.660 [2024-12-09 14:57:05.672963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.660 [2024-12-09 14:57:05.672975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.661 [2024-12-09 14:57:05.672992] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:24:27.661 [2024-12-09 14:57:05.673000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.688976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.689026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.661 [2024-12-09 14:57:05.689037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.955 ms 00:24:27.661 [2024-12-09 14:57:05.689044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.703310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:27.661 [2024-12-09 14:57:05.703363] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:27.661 [2024-12-09 14:57:05.703378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.703387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:27.661 [2024-12-09 14:57:05.703399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.221 ms 00:24:27.661 [2024-12-09 14:57:05.703406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.729725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.729778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:27.661 [2024-12-09 14:57:05.729791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.261 ms 00:24:27.661 [2024-12-09 14:57:05.729808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.742979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.743034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:27.661 [2024-12-09 14:57:05.743046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.102 ms 00:24:27.661 [2024-12-09 14:57:05.743055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.755783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.755838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:27.661 [2024-12-09 14:57:05.755851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:24:27.661 [2024-12-09 14:57:05.755858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.661 [2024-12-09 14:57:05.756500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.661 [2024-12-09 14:57:05.756527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:27.661 [2024-12-09 14:57:05.756540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:24:27.661 [2024-12-09 14:57:05.756548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.922 [2024-12-09 14:57:05.822487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.922 [2024-12-09 14:57:05.822563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:27.922 [2024-12-09 14:57:05.822588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.918 ms 00:24:27.922 [2024-12-09 14:57:05.822598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.922 [2024-12-09 14:57:05.834168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:27.922 [2024-12-09 14:57:05.837587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.922 [2024-12-09 14:57:05.837632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:27.922 [2024-12-09 14:57:05.837644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.923 ms 00:24:27.922 [2024-12-09 14:57:05.837653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.922 [2024-12-09 14:57:05.837746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.922 [2024-12-09 14:57:05.837758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:27.922 [2024-12-09 14:57:05.837772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:27.922 [2024-12-09 14:57:05.837781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.922 [2024-12-09 14:57:05.837873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.922 [2024-12-09 14:57:05.837886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:27.922 [2024-12-09 14:57:05.837895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:27.922 [2024-12-09 14:57:05.837904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.922 [2024-12-09 14:57:05.837928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.922 [2024-12-09 14:57:05.837938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:27.922 [2024-12-09 14:57:05.837947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:27.923 [2024-12-09 14:57:05.837956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.923 [2024-12-09 14:57:05.837993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:27.923 [2024-12-09 14:57:05.838003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.923 [2024-12-09 14:57:05.838012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:27.923 [2024-12-09 14:57:05.838021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:27.923 [2024-12-09 14:57:05.838030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.923 [2024-12-09 14:57:05.864460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.923 [2024-12-09 14:57:05.864510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.923 [2024-12-09 14:57:05.864530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.408 ms 00:24:27.923 [2024-12-09 14:57:05.864539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.923 [2024-12-09 14:57:05.864626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.923 [2024-12-09 14:57:05.864637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.923 [2024-12-09 14:57:05.864646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:27.923 [2024-12-09 14:57:05.864654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
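Every management step in the startup trace above is reported by mngt/ftl_mngt.c:trace_step as the same four-entry group: the step kind (Action on the forward path, Rollback when tearing down), the step name, its wall-clock duration, and a status code; the same step names reappear under Rollback in the 'FTL shutdown' trace further below, in reverse order. A minimal C sketch of that logging shape, with hypothetical names — this only illustrates the output format observed in the log, not SPDK's actual mngt internals:

#include <stdio.h>
#include <time.h>

/* Hypothetical step descriptor: a named forward action. */
struct mgmt_step {
	const char *name;
	int (*action)(void);   /* returns 0 on success */
};

static double elapsed_ms(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

/* Emits the same four NOTICE entries seen in the trace above. */
static int run_step(const struct mgmt_step *step, const char *kind)
{
	struct timespec start, end;
	int status;

	clock_gettime(CLOCK_MONOTONIC, &start);
	status = step->action ? step->action() : 0;
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("[FTL][ftl0] %s\n", kind);
	printf("[FTL][ftl0] name: %s\n", step->name);
	printf("[FTL][ftl0] duration: %.3f ms\n", elapsed_ms(start, end));
	printf("[FTL][ftl0] status: %d\n", status);
	return status;
}

int main(void)
{
	const struct mgmt_step steps[] = {
		{ "Check configuration", NULL },
		{ "Open base bdev", NULL },
		{ "Load super block", NULL },
	};
	size_t i;

	/* Forward path: run each step in order, stopping on failure. */
	for (i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
		if (run_step(&steps[i], "Action") != 0)
			break;
	}
	/* On shutdown (or failure) the completed steps are unwound in
	 * reverse order, which is why the shutdown trace replays the
	 * initialization step names as Rollback entries. */
	while (i-- > 0)
		run_step(&steps[i], "Rollback");
	return 0;
}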
00:24:27.923 [2024-12-09 14:57:05.866336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.332 ms, result 0 00:24:28.867  [2024-12-09T14:57:07.934Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-09T14:57:08.881Z] Copying: 27/1024 [MB] (16 MBps) [2024-12-09T14:57:10.269Z] Copying: 44/1024 [MB] (16 MBps) [2024-12-09T14:57:11.212Z] Copying: 63/1024 [MB] (19 MBps) [2024-12-09T14:57:12.157Z] Copying: 74/1024 [MB] (11 MBps) [2024-12-09T14:57:13.105Z] Copying: 88/1024 [MB] (13 MBps) [2024-12-09T14:57:14.039Z] Copying: 100/1024 [MB] (12 MBps) [2024-12-09T14:57:14.978Z] Copying: 132/1024 [MB] (31 MBps) [2024-12-09T14:57:15.916Z] Copying: 166/1024 [MB] (33 MBps) [2024-12-09T14:57:17.291Z] Copying: 194/1024 [MB] (27 MBps) [2024-12-09T14:57:18.235Z] Copying: 242/1024 [MB] (47 MBps) [2024-12-09T14:57:19.169Z] Copying: 270/1024 [MB] (27 MBps) [2024-12-09T14:57:20.108Z] Copying: 306/1024 [MB] (36 MBps) [2024-12-09T14:57:21.049Z] Copying: 340/1024 [MB] (34 MBps) [2024-12-09T14:57:21.982Z] Copying: 360/1024 [MB] (20 MBps) [2024-12-09T14:57:22.919Z] Copying: 395/1024 [MB] (34 MBps) [2024-12-09T14:57:24.306Z] Copying: 430/1024 [MB] (35 MBps) [2024-12-09T14:57:25.242Z] Copying: 449/1024 [MB] (19 MBps) [2024-12-09T14:57:26.182Z] Copying: 468/1024 [MB] (18 MBps) [2024-12-09T14:57:27.123Z] Copying: 486/1024 [MB] (18 MBps) [2024-12-09T14:57:28.065Z] Copying: 504/1024 [MB] (17 MBps) [2024-12-09T14:57:29.005Z] Copying: 524/1024 [MB] (20 MBps) [2024-12-09T14:57:29.947Z] Copying: 545/1024 [MB] (20 MBps) [2024-12-09T14:57:30.888Z] Copying: 563/1024 [MB] (18 MBps) [2024-12-09T14:57:32.273Z] Copying: 573/1024 [MB] (10 MBps) [2024-12-09T14:57:32.918Z] Copying: 584/1024 [MB] (10 MBps) [2024-12-09T14:57:34.302Z] Copying: 600/1024 [MB] (16 MBps) [2024-12-09T14:57:35.248Z] Copying: 611/1024 [MB] (10 MBps) [2024-12-09T14:57:36.193Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-09T14:57:37.137Z] Copying: 631/1024 [MB] (10 MBps) [2024-12-09T14:57:38.079Z] Copying: 648/1024 [MB] (16 MBps) [2024-12-09T14:57:39.022Z] Copying: 670/1024 [MB] (22 MBps) [2024-12-09T14:57:39.963Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-09T14:57:40.904Z] Copying: 691/1024 [MB] (10 MBps) [2024-12-09T14:57:42.286Z] Copying: 701/1024 [MB] (10 MBps) [2024-12-09T14:57:43.227Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-09T14:57:44.170Z] Copying: 721/1024 [MB] (10 MBps) [2024-12-09T14:57:45.112Z] Copying: 731/1024 [MB] (10 MBps) [2024-12-09T14:57:46.056Z] Copying: 759656/1048576 [kB] (10116 kBps) [2024-12-09T14:57:47.001Z] Copying: 769788/1048576 [kB] (10132 kBps) [2024-12-09T14:57:47.946Z] Copying: 779872/1048576 [kB] (10084 kBps) [2024-12-09T14:57:48.892Z] Copying: 790052/1048576 [kB] (10180 kBps) [2024-12-09T14:57:50.279Z] Copying: 800264/1048576 [kB] (10212 kBps) [2024-12-09T14:57:51.223Z] Copying: 791/1024 [MB] (10 MBps) [2024-12-09T14:57:52.164Z] Copying: 802/1024 [MB] (10 MBps) [2024-12-09T14:57:53.104Z] Copying: 831400/1048576 [kB] (10072 kBps) [2024-12-09T14:57:54.045Z] Copying: 841624/1048576 [kB] (10224 kBps) [2024-12-09T14:57:54.986Z] Copying: 831/1024 [MB] (10 MBps) [2024-12-09T14:57:55.927Z] Copying: 842/1024 [MB] (10 MBps) [2024-12-09T14:57:57.313Z] Copying: 852/1024 [MB] (10 MBps) [2024-12-09T14:57:57.885Z] Copying: 863/1024 [MB] (10 MBps) [2024-12-09T14:57:59.272Z] Copying: 873/1024 [MB] (10 MBps) [2024-12-09T14:58:00.218Z] Copying: 884/1024 [MB] (10 MBps) [2024-12-09T14:58:01.155Z] Copying: 894/1024 [MB] (10 MBps) [2024-12-09T14:58:02.088Z] Copying: 915/1024 [MB] (20 MBps) 
[2024-12-09T14:58:03.021Z] Copying: 962/1024 [MB] (47 MBps) [2024-12-09T14:58:03.960Z] Copying: 1010/1024 [MB] (48 MBps) [2024-12-09T14:58:04.222Z] Copying: 1023/1024 [MB] (13 MBps) [2024-12-09T14:58:04.222Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-09 14:58:04.068051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.068136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.100 [2024-12-09 14:58:04.068166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.100 [2024-12-09 14:58:04.068176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.068532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.100 [2024-12-09 14:58:04.074920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.074999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.100 [2024-12-09 14:58:04.075012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:25:26.100 [2024-12-09 14:58:04.075021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.086764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.086829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.100 [2024-12-09 14:58:04.086844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:25:26.100 [2024-12-09 14:58:04.086861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.112550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.112606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:26.100 [2024-12-09 14:58:04.112618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.667 ms 00:25:26.100 [2024-12-09 14:58:04.112627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.118755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.118799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:26.100 [2024-12-09 14:58:04.118821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:25:26.100 [2024-12-09 14:58:04.118840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.146534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.146584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:26.100 [2024-12-09 14:58:04.146597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.630 ms 00:25:26.100 [2024-12-09 14:58:04.146606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.100 [2024-12-09 14:58:04.163367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.100 [2024-12-09 14:58:04.163417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:26.100 [2024-12-09 14:58:04.163429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.710 ms 00:25:26.100 [2024-12-09 14:58:04.163437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
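The statistics block in the shutdown trace below reports WAF: 1.0091 alongside total writes: 106432 and user writes: 105472. Those counters are self-consistent under the usual definition of write amplification — writes issued to the media divided by writes requested by the user:

\[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{106432}{105472} \approx 1.0091 \]

The difference of 960 writes is the FTL's internal (non-user) traffic, e.g. metadata writes, on top of the user workload.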
00:25:26.361 [2024-12-09 14:58:04.464991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.361 [2024-12-09 14:58:04.465063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:26.361 [2024-12-09 14:58:04.465076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 301.494 ms 00:25:26.361 [2024-12-09 14:58:04.465085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.624 [2024-12-09 14:58:04.492316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.624 [2024-12-09 14:58:04.492367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:26.624 [2024-12-09 14:58:04.492380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.214 ms 00:25:26.624 [2024-12-09 14:58:04.492388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.624 [2024-12-09 14:58:04.518840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.624 [2024-12-09 14:58:04.518888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:26.624 [2024-12-09 14:58:04.518900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.403 ms 00:25:26.624 [2024-12-09 14:58:04.518908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.624 [2024-12-09 14:58:04.544450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.624 [2024-12-09 14:58:04.544501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:26.624 [2024-12-09 14:58:04.544515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.471 ms 00:25:26.624 [2024-12-09 14:58:04.544522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.624 [2024-12-09 14:58:04.570429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.624 [2024-12-09 14:58:04.570479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:26.624 [2024-12-09 14:58:04.570492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.810 ms 00:25:26.624 [2024-12-09 14:58:04.570501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.624 [2024-12-09 14:58:04.570549] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:26.624 [2024-12-09 14:58:04.570565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105472 / 261120 wr_cnt: 1 state: open 00:25:26.624 [2024-12-09 14:58:04.570577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 
14:58:04.570635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:25:26.624 [2024-12-09 14:58:04.570846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:26.624 [2024-12-09 14:58:04.570901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.570998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:26.625 [2024-12-09 14:58:04.571425] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:26.625 [2024-12-09 14:58:04.571434] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:25:26.625 [2024-12-09 14:58:04.571442] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105472 00:25:26.625 [2024-12-09 14:58:04.571451] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106432 00:25:26.625 [2024-12-09 14:58:04.571459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105472 00:25:26.625 [2024-12-09 14:58:04.571469] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:25:26.625 [2024-12-09 14:58:04.571489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:26.625 [2024-12-09 14:58:04.571497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:26.625 [2024-12-09 14:58:04.571506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:26.625 
[2024-12-09 14:58:04.571513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:26.625 [2024-12-09 14:58:04.571520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:26.625 [2024-12-09 14:58:04.571528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.625 [2024-12-09 14:58:04.571536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:26.625 [2024-12-09 14:58:04.571544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:25:26.625 [2024-12-09 14:58:04.571552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.585705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.625 [2024-12-09 14:58:04.585749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:26.625 [2024-12-09 14:58:04.585768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.117 ms 00:25:26.625 [2024-12-09 14:58:04.585777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.586183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.625 [2024-12-09 14:58:04.586201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:26.625 [2024-12-09 14:58:04.586210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:25:26.625 [2024-12-09 14:58:04.586218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.623090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.625 [2024-12-09 14:58:04.623141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.625 [2024-12-09 14:58:04.623154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.625 [2024-12-09 14:58:04.623163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.623236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.625 [2024-12-09 14:58:04.623247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.625 [2024-12-09 14:58:04.623256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.625 [2024-12-09 14:58:04.623265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.623354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.625 [2024-12-09 14:58:04.623369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.625 [2024-12-09 14:58:04.623378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.625 [2024-12-09 14:58:04.623386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.625 [2024-12-09 14:58:04.623402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.625 [2024-12-09 14:58:04.623411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.625 [2024-12-09 14:58:04.623419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.625 [2024-12-09 14:58:04.623427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.626 [2024-12-09 14:58:04.709608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.626 [2024-12-09 14:58:04.709675] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.626 [2024-12-09 14:58:04.709688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.626 [2024-12-09 14:58:04.709698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.886 [2024-12-09 14:58:04.780084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.886 [2024-12-09 14:58:04.780142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.886 [2024-12-09 14:58:04.780155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.886 [2024-12-09 14:58:04.780165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.886 [2024-12-09 14:58:04.780222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.886 [2024-12-09 14:58:04.780232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.886 [2024-12-09 14:58:04.780241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.886 [2024-12-09 14:58:04.780257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.886 [2024-12-09 14:58:04.780322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.886 [2024-12-09 14:58:04.780334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.886 [2024-12-09 14:58:04.780343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.886 [2024-12-09 14:58:04.780352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.886 [2024-12-09 14:58:04.780455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.886 [2024-12-09 14:58:04.780467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.886 [2024-12-09 14:58:04.780475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.886 [2024-12-09 14:58:04.780487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.887 [2024-12-09 14:58:04.780519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.887 [2024-12-09 14:58:04.780530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:26.887 [2024-12-09 14:58:04.780538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.887 [2024-12-09 14:58:04.780546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.887 [2024-12-09 14:58:04.780588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.887 [2024-12-09 14:58:04.780597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.887 [2024-12-09 14:58:04.780606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.887 [2024-12-09 14:58:04.780615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.887 [2024-12-09 14:58:04.780664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.887 [2024-12-09 14:58:04.780684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.887 [2024-12-09 14:58:04.780692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.887 [2024-12-09 14:58:04.780701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.887 [2024-12-09 14:58:04.780863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 714.142 ms, result 0 00:25:28.308 00:25:28.308 00:25:28.308 14:58:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:28.308 [2024-12-09 14:58:06.382169] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:25:28.308 [2024-12-09 14:58:06.382320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80724 ] 00:25:28.593 [2024-12-09 14:58:06.548435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.593 [2024-12-09 14:58:06.673035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.854 [2024-12-09 14:58:06.974525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.854 [2024-12-09 14:58:06.974619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:29.117 [2024-12-09 14:58:07.136120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.136188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:29.117 [2024-12-09 14:58:07.136203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:29.117 [2024-12-09 14:58:07.136212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.136269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.136284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.117 [2024-12-09 14:58:07.136293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:29.117 [2024-12-09 14:58:07.136301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.136324] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:29.117 [2024-12-09 14:58:07.137039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:29.117 [2024-12-09 14:58:07.137069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.137077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.117 [2024-12-09 14:58:07.137086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:25:29.117 [2024-12-09 14:58:07.137094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.139014] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:29.117 [2024-12-09 14:58:07.153524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.153577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:29.117 [2024-12-09 14:58:07.153591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.513 ms 00:25:29.117 [2024-12-09 14:58:07.153600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.153689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.153700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:29.117 [2024-12-09 14:58:07.153709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:29.117 [2024-12-09 14:58:07.153717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.162130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.162174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.117 [2024-12-09 14:58:07.162185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.328 ms 00:25:29.117 [2024-12-09 14:58:07.162199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.162281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.162291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.117 [2024-12-09 14:58:07.162300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:29.117 [2024-12-09 14:58:07.162308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.162354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.162365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:29.117 [2024-12-09 14:58:07.162374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:29.117 [2024-12-09 14:58:07.162381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.162409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:29.117 [2024-12-09 14:58:07.166435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.166473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.117 [2024-12-09 14:58:07.166487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:25:29.117 [2024-12-09 14:58:07.166496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.166536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.166544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:29.117 [2024-12-09 14:58:07.166553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:29.117 [2024-12-09 14:58:07.166562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.166614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:29.117 [2024-12-09 14:58:07.166640] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:29.117 [2024-12-09 14:58:07.166676] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:29.117 [2024-12-09 14:58:07.166696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:29.117 [2024-12-09 14:58:07.166818] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:29.117 [2024-12-09 14:58:07.166831] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:29.117 [2024-12-09 14:58:07.166843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:29.117 [2024-12-09 14:58:07.166854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:29.117 [2024-12-09 14:58:07.166864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:29.117 [2024-12-09 14:58:07.166873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:29.117 [2024-12-09 14:58:07.166881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:29.117 [2024-12-09 14:58:07.166891] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:29.117 [2024-12-09 14:58:07.166898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:29.117 [2024-12-09 14:58:07.166907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.166915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:29.117 [2024-12-09 14:58:07.166924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:25:29.117 [2024-12-09 14:58:07.166932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.167040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.117 [2024-12-09 14:58:07.167049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:29.117 [2024-12-09 14:58:07.167057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:29.117 [2024-12-09 14:58:07.167064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.117 [2024-12-09 14:58:07.167174] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:29.117 [2024-12-09 14:58:07.167185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:29.117 [2024-12-09 14:58:07.167193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:29.117 [2024-12-09 14:58:07.167217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:29.117 [2024-12-09 14:58:07.167239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.117 [2024-12-09 14:58:07.167253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:29.117 [2024-12-09 14:58:07.167260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:29.117 [2024-12-09 14:58:07.167267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.117 [2024-12-09 14:58:07.167280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:29.117 [2024-12-09 14:58:07.167287] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:29.117 [2024-12-09 14:58:07.167294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:29.117 [2024-12-09 14:58:07.167307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:29.117 [2024-12-09 14:58:07.167328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:29.117 [2024-12-09 14:58:07.167348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:29.117 [2024-12-09 14:58:07.167367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:29.117 [2024-12-09 14:58:07.167387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.117 [2024-12-09 14:58:07.167400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:29.117 [2024-12-09 14:58:07.167407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:29.117 [2024-12-09 14:58:07.167414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.118 [2024-12-09 14:58:07.167421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:29.118 [2024-12-09 14:58:07.167427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:29.118 [2024-12-09 14:58:07.167433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.118 [2024-12-09 14:58:07.167439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:29.118 [2024-12-09 14:58:07.167445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:29.118 [2024-12-09 14:58:07.167454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.118 [2024-12-09 14:58:07.167461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:29.118 [2024-12-09 14:58:07.167467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:29.118 [2024-12-09 14:58:07.167474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.118 [2024-12-09 14:58:07.167480] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:29.118 [2024-12-09 14:58:07.167488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:29.118 [2024-12-09 14:58:07.167496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:25:29.118 [2024-12-09 14:58:07.167503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.118 [2024-12-09 14:58:07.167511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:29.118 [2024-12-09 14:58:07.167518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:29.118 [2024-12-09 14:58:07.167527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:29.118 [2024-12-09 14:58:07.167534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:29.118 [2024-12-09 14:58:07.167540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:29.118 [2024-12-09 14:58:07.167547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:29.118 [2024-12-09 14:58:07.167555] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:29.118 [2024-12-09 14:58:07.167564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:29.118 [2024-12-09 14:58:07.167584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:29.118 [2024-12-09 14:58:07.167592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:29.118 [2024-12-09 14:58:07.167599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:29.118 [2024-12-09 14:58:07.167606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:29.118 [2024-12-09 14:58:07.167614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:29.118 [2024-12-09 14:58:07.167622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:29.118 [2024-12-09 14:58:07.167629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:29.118 [2024-12-09 14:58:07.167636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:29.118 [2024-12-09 14:58:07.167643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
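A note on the SB metadata layout dump above: the regions are laid out contiguously, so each region's blk_offs equals the previous region's blk_offs plus its blk_sz (0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, 0x5020 + 0x80 = 0x50a0, and so on, with type 0xfffffffe marking the free spans). A minimal shell sketch of that sanity check, assuming the "Region type:..." lines from a single dump have been captured to layout.log (a hypothetical file, not produced by this run):

    #!/usr/bin/env bash
    # Contiguity check for one FTL SB metadata layout dump.
    # layout.log is a hypothetical capture of the "Region type:..." lines above;
    # one dump per file is assumed (the base-dev dump restarts at blk_offs 0x0).
    prev_end=0
    grep -o 'blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' layout.log |
    while read -r offs_field sz_field; do
        offs=$(( ${offs_field#blk_offs:} ))   # hex string -> integer
        sz=$(( ${sz_field#blk_sz:} ))
        if (( offs != prev_end )); then
            printf 'gap/overlap at blk_offs 0x%x (expected 0x%x)\n' "$offs" "$prev_end"
        fi
        prev_end=$(( offs + sz ))
    done

Run against the nvc dump above, the loop reports nothing: every region starts exactly where the previous one ends.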
00:25:29.118 [2024-12-09 14:58:07.167678] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:29.118 [2024-12-09 14:58:07.167686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:29.118 [2024-12-09 14:58:07.167702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:29.118 [2024-12-09 14:58:07.167710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:29.118 [2024-12-09 14:58:07.167717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:29.118 [2024-12-09 14:58:07.167724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.118 [2024-12-09 14:58:07.167731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:29.118 [2024-12-09 14:58:07.167739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:25:29.118 [2024-12-09 14:58:07.167747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.118 [2024-12-09 14:58:07.200341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.118 [2024-12-09 14:58:07.200397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.118 [2024-12-09 14:58:07.200410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.546 ms 00:25:29.118 [2024-12-09 14:58:07.200423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.118 [2024-12-09 14:58:07.200517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.118 [2024-12-09 14:58:07.200526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:29.118 [2024-12-09 14:58:07.200535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:29.118 [2024-12-09 14:58:07.200543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.249449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.249506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.380 [2024-12-09 14:58:07.249520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.840 ms 00:25:29.380 [2024-12-09 14:58:07.249529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.249580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.249591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.380 [2024-12-09 14:58:07.249604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:29.380 [2024-12-09 14:58:07.249612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.250270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.250310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.380 [2024-12-09 
14:58:07.250321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:25:29.380 [2024-12-09 14:58:07.250330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.250487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.250498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.380 [2024-12-09 14:58:07.250513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:25:29.380 [2024-12-09 14:58:07.250521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.266406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.266457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.380 [2024-12-09 14:58:07.266469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.864 ms 00:25:29.380 [2024-12-09 14:58:07.266476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.281339] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:29.380 [2024-12-09 14:58:07.281390] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:29.380 [2024-12-09 14:58:07.281405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.281413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:29.380 [2024-12-09 14:58:07.281422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.813 ms 00:25:29.380 [2024-12-09 14:58:07.281429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.308017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.308069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:29.380 [2024-12-09 14:58:07.308082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.529 ms 00:25:29.380 [2024-12-09 14:58:07.308091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.321419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.321470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:29.380 [2024-12-09 14:58:07.321483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.262 ms 00:25:29.380 [2024-12-09 14:58:07.321491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.334732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.334784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:29.380 [2024-12-09 14:58:07.334796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.188 ms 00:25:29.380 [2024-12-09 14:58:07.334814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.335490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.335519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:29.380 [2024-12-09 14:58:07.335533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.561 ms 00:25:29.380 [2024-12-09 14:58:07.335541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.402933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.403011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:29.380 [2024-12-09 14:58:07.403035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.370 ms 00:25:29.380 [2024-12-09 14:58:07.403045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.414791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:29.380 [2024-12-09 14:58:07.418257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.418303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:29.380 [2024-12-09 14:58:07.418316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.145 ms 00:25:29.380 [2024-12-09 14:58:07.418324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.418419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.418431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:29.380 [2024-12-09 14:58:07.418444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:29.380 [2024-12-09 14:58:07.418453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.420288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.420343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:29.380 [2024-12-09 14:58:07.420355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:25:29.380 [2024-12-09 14:58:07.420363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.420396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.420405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:29.380 [2024-12-09 14:58:07.420416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:29.380 [2024-12-09 14:58:07.420424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.420471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:29.380 [2024-12-09 14:58:07.420483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.420492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:29.380 [2024-12-09 14:58:07.420500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:29.380 [2024-12-09 14:58:07.420508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.447408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.447461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:29.380 [2024-12-09 14:58:07.447480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.879 ms 00:25:29.380 [2024-12-09 14:58:07.447488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:29.380 [2024-12-09 14:58:07.447577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.380 [2024-12-09 14:58:07.447587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:29.380 [2024-12-09 14:58:07.447597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:29.380 [2024-12-09 14:58:07.447605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.380 [2024-12-09 14:58:07.448922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.283 ms, result 0 00:25:30.772  [2024-12-09T14:58:09.839Z] Copying: 9872/1048576 [kB] (9872 kBps) [2024-12-09T14:58:10.784Z] Copying: 23/1024 [MB] (13 MBps) [2024-12-09T14:58:11.727Z] Copying: 39/1024 [MB] (16 MBps) [2024-12-09T14:58:12.670Z] Copying: 60/1024 [MB] (21 MBps) [2024-12-09T14:58:14.057Z] Copying: 83/1024 [MB] (22 MBps) [2024-12-09T14:58:14.998Z] Copying: 100/1024 [MB] (16 MBps) [2024-12-09T14:58:15.950Z] Copying: 120/1024 [MB] (19 MBps) [2024-12-09T14:58:16.892Z] Copying: 141/1024 [MB] (20 MBps) [2024-12-09T14:58:17.838Z] Copying: 155/1024 [MB] (14 MBps) [2024-12-09T14:58:18.783Z] Copying: 168/1024 [MB] (12 MBps) [2024-12-09T14:58:19.727Z] Copying: 186/1024 [MB] (18 MBps) [2024-12-09T14:58:20.673Z] Copying: 203/1024 [MB] (17 MBps) [2024-12-09T14:58:22.056Z] Copying: 217/1024 [MB] (14 MBps) [2024-12-09T14:58:22.997Z] Copying: 235/1024 [MB] (18 MBps) [2024-12-09T14:58:23.942Z] Copying: 258/1024 [MB] (23 MBps) [2024-12-09T14:58:24.887Z] Copying: 276/1024 [MB] (17 MBps) [2024-12-09T14:58:25.830Z] Copying: 287/1024 [MB] (10 MBps) [2024-12-09T14:58:26.776Z] Copying: 297/1024 [MB] (10 MBps) [2024-12-09T14:58:27.721Z] Copying: 309/1024 [MB] (11 MBps) [2024-12-09T14:58:28.666Z] Copying: 324/1024 [MB] (15 MBps) [2024-12-09T14:58:30.050Z] Copying: 344/1024 [MB] (19 MBps) [2024-12-09T14:58:30.995Z] Copying: 368/1024 [MB] (23 MBps) [2024-12-09T14:58:31.942Z] Copying: 385/1024 [MB] (17 MBps) [2024-12-09T14:58:32.887Z] Copying: 402/1024 [MB] (17 MBps) [2024-12-09T14:58:33.832Z] Copying: 414/1024 [MB] (12 MBps) [2024-12-09T14:58:34.776Z] Copying: 428/1024 [MB] (13 MBps) [2024-12-09T14:58:35.722Z] Copying: 441/1024 [MB] (13 MBps) [2024-12-09T14:58:36.665Z] Copying: 460/1024 [MB] (19 MBps) [2024-12-09T14:58:38.054Z] Copying: 481/1024 [MB] (21 MBps) [2024-12-09T14:58:38.998Z] Copying: 495/1024 [MB] (13 MBps) [2024-12-09T14:58:39.942Z] Copying: 513/1024 [MB] (17 MBps) [2024-12-09T14:58:40.887Z] Copying: 524/1024 [MB] (11 MBps) [2024-12-09T14:58:41.911Z] Copying: 534/1024 [MB] (10 MBps) [2024-12-09T14:58:42.855Z] Copying: 545/1024 [MB] (10 MBps) [2024-12-09T14:58:43.800Z] Copying: 561/1024 [MB] (15 MBps) [2024-12-09T14:58:44.744Z] Copying: 571/1024 [MB] (10 MBps) [2024-12-09T14:58:45.691Z] Copying: 581/1024 [MB] (10 MBps) [2024-12-09T14:58:47.079Z] Copying: 592/1024 [MB] (10 MBps) [2024-12-09T14:58:47.653Z] Copying: 602/1024 [MB] (10 MBps) [2024-12-09T14:58:49.041Z] Copying: 619/1024 [MB] (17 MBps) [2024-12-09T14:58:49.985Z] Copying: 634/1024 [MB] (14 MBps) [2024-12-09T14:58:50.930Z] Copying: 651/1024 [MB] (17 MBps) [2024-12-09T14:58:51.876Z] Copying: 671/1024 [MB] (20 MBps) [2024-12-09T14:58:52.820Z] Copying: 685/1024 [MB] (13 MBps) [2024-12-09T14:58:53.766Z] Copying: 696/1024 [MB] (11 MBps) [2024-12-09T14:58:54.711Z] Copying: 706/1024 [MB] (10 MBps) [2024-12-09T14:58:55.654Z] Copying: 718/1024 [MB] (11 MBps) [2024-12-09T14:58:57.042Z] Copying: 739/1024 [MB] (21 MBps) 
[2024-12-09T14:58:57.987Z] Copying: 754/1024 [MB] (14 MBps) [2024-12-09T14:58:58.932Z] Copying: 767/1024 [MB] (13 MBps) [2024-12-09T14:58:59.875Z] Copying: 780/1024 [MB] (13 MBps) [2024-12-09T14:59:00.822Z] Copying: 794/1024 [MB] (13 MBps) [2024-12-09T14:59:01.766Z] Copying: 807/1024 [MB] (12 MBps) [2024-12-09T14:59:02.710Z] Copying: 826/1024 [MB] (19 MBps) [2024-12-09T14:59:03.653Z] Copying: 842/1024 [MB] (15 MBps) [2024-12-09T14:59:05.043Z] Copying: 858/1024 [MB] (16 MBps) [2024-12-09T14:59:05.990Z] Copying: 872/1024 [MB] (13 MBps) [2024-12-09T14:59:06.932Z] Copying: 890/1024 [MB] (18 MBps) [2024-12-09T14:59:07.873Z] Copying: 904/1024 [MB] (14 MBps) [2024-12-09T14:59:08.815Z] Copying: 915/1024 [MB] (10 MBps) [2024-12-09T14:59:09.756Z] Copying: 925/1024 [MB] (10 MBps) [2024-12-09T14:59:10.699Z] Copying: 936/1024 [MB] (10 MBps) [2024-12-09T14:59:12.082Z] Copying: 947/1024 [MB] (11 MBps) [2024-12-09T14:59:12.654Z] Copying: 957/1024 [MB] (10 MBps) [2024-12-09T14:59:14.043Z] Copying: 968/1024 [MB] (10 MBps) [2024-12-09T14:59:14.992Z] Copying: 979/1024 [MB] (10 MBps) [2024-12-09T14:59:15.956Z] Copying: 990/1024 [MB] (10 MBps) [2024-12-09T14:59:16.937Z] Copying: 1000/1024 [MB] (10 MBps) [2024-12-09T14:59:17.197Z] Copying: 1015/1024 [MB] (15 MBps) [2024-12-09T14:59:17.197Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-09 14:59:17.119344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.119443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:39.075 [2024-12-09 14:59:17.119481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:39.075 [2024-12-09 14:59:17.119494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.119526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:39.075 [2024-12-09 14:59:17.123691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.123747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:39.075 [2024-12-09 14:59:17.123763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms 00:26:39.075 [2024-12-09 14:59:17.123776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.124110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.124127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:39.075 [2024-12-09 14:59:17.124141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:39.075 [2024-12-09 14:59:17.124161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.131321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.131375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:39.075 [2024-12-09 14:59:17.131387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.136 ms 00:26:39.075 [2024-12-09 14:59:17.131397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.137621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.137671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:39.075 [2024-12-09 14:59:17.137683] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.180 ms 00:26:39.075 [2024-12-09 14:59:17.137701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.165160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.165214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:39.075 [2024-12-09 14:59:17.165228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.411 ms 00:26:39.075 [2024-12-09 14:59:17.165236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.075 [2024-12-09 14:59:17.182108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.075 [2024-12-09 14:59:17.182159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:39.075 [2024-12-09 14:59:17.182173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.822 ms 00:26:39.075 [2024-12-09 14:59:17.182182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.521041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.648 [2024-12-09 14:59:17.521116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:39.648 [2024-12-09 14:59:17.521129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 338.802 ms 00:26:39.648 [2024-12-09 14:59:17.521138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.547754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.648 [2024-12-09 14:59:17.547817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:39.648 [2024-12-09 14:59:17.547830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.598 ms 00:26:39.648 [2024-12-09 14:59:17.547839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.573109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.648 [2024-12-09 14:59:17.573161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:39.648 [2024-12-09 14:59:17.573174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.217 ms 00:26:39.648 [2024-12-09 14:59:17.573181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.598153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.648 [2024-12-09 14:59:17.598207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:39.648 [2024-12-09 14:59:17.598220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.923 ms 00:26:39.648 [2024-12-09 14:59:17.598228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.622754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.648 [2024-12-09 14:59:17.622812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:39.648 [2024-12-09 14:59:17.622825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.449 ms 00:26:39.648 [2024-12-09 14:59:17.622833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.648 [2024-12-09 14:59:17.622880] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:39.648 [2024-12-09 14:59:17.622897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:39.648 [2024-12-09 14:59:17.622908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.622992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623129] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:39.648 [2024-12-09 14:59:17.623194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 
14:59:17.623383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:26:39.649 [2024-12-09 14:59:17.623654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:26:39.649 [2024-12-09 14:59:17.623904] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:39.649 [2024-12-09 14:59:17.623913] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d275a0e-c7d7-4199-8bd3-cc8b877c7a19 00:26:39.649 [2024-12-09 14:59:17.623922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:39.649 [2024-12-09 14:59:17.623930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26560 00:26:39.649 [2024-12-09 14:59:17.623945] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25600 00:26:39.649 [2024-12-09 14:59:17.623955] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0375 00:26:39.649 [2024-12-09 14:59:17.623971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:39.649 [2024-12-09 14:59:17.623992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:39.649 [2024-12-09 14:59:17.624001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:39.649 [2024-12-09 14:59:17.624008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:39.649 [2024-12-09 14:59:17.624016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:39.649 [2024-12-09 14:59:17.624024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.649 [2024-12-09 14:59:17.624033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:39.649 [2024-12-09 14:59:17.624041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:26:39.649 [2024-12-09 14:59:17.624048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.649 [2024-12-09 14:59:17.637766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.649 [2024-12-09 14:59:17.637843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:39.649 [2024-12-09 14:59:17.637864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.675 ms 00:26:39.649 [2024-12-09 14:59:17.637873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.649 [2024-12-09 14:59:17.638283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.649 [2024-12-09 14:59:17.638304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:39.649 [2024-12-09 14:59:17.638316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:26:39.649 [2024-12-09 14:59:17.638324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.649 [2024-12-09 14:59:17.674752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.649 [2024-12-09 14:59:17.674821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:39.649 [2024-12-09 14:59:17.674833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.649 [2024-12-09 14:59:17.674842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.649 [2024-12-09 14:59:17.674914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.649 [2024-12-09 14:59:17.674923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:39.649 [2024-12-09 14:59:17.674932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.649 [2024-12-09 14:59:17.674941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:39.649 [2024-12-09 14:59:17.675028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.650 [2024-12-09 14:59:17.675041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:39.650 [2024-12-09 14:59:17.675056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.650 [2024-12-09 14:59:17.675065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.650 [2024-12-09 14:59:17.675082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.650 [2024-12-09 14:59:17.675090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:39.650 [2024-12-09 14:59:17.675100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.650 [2024-12-09 14:59:17.675108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.650 [2024-12-09 14:59:17.758830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.650 [2024-12-09 14:59:17.758894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:39.650 [2024-12-09 14:59:17.758908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.650 [2024-12-09 14:59:17.758916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.827730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.827789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:39.911 [2024-12-09 14:59:17.827818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.827828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.827907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.827918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.911 [2024-12-09 14:59:17.827928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.827943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.827985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.827997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.911 [2024-12-09 14:59:17.828006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.828015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.828115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.828126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.911 [2024-12-09 14:59:17.828136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.828144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.828181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.828192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:39.911 [2024-12-09 14:59:17.828200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 
14:59:17.828208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.828248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.828258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.911 [2024-12-09 14:59:17.828266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.828274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.828323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.911 [2024-12-09 14:59:17.828334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.911 [2024-12-09 14:59:17.828343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.911 [2024-12-09 14:59:17.828350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.911 [2024-12-09 14:59:17.828487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 709.111 ms, result 0 00:26:40.483 00:26:40.483 00:26:40.744 14:59:18 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:43.293 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78711 00:26:43.293 14:59:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78711 ']' 00:26:43.293 14:59:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78711 00:26:43.293 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78711) - No such process 00:26:43.293 Process with pid 78711 is not found 00:26:43.293 14:59:20 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78711 is not found' 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:43.293 Remove shared memory files 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:43.293 14:59:20 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:43.293 00:26:43.293 real 4m29.232s 00:26:43.293 user 4m16.140s 00:26:43.293 sys 0m13.066s 00:26:43.293 ************************************ 00:26:43.293 END TEST ftl_restore 00:26:43.293 14:59:21 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.293 14:59:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:43.293 ************************************ 00:26:43.293 14:59:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:43.293 14:59:21 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:43.293 14:59:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.293 14:59:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:43.293 ************************************ 00:26:43.293 START TEST ftl_dirty_shutdown 00:26:43.293 ************************************ 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:43.293 * Looking for test storage... 00:26:43.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:43.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.293 --rc genhtml_branch_coverage=1 00:26:43.293 --rc genhtml_function_coverage=1 00:26:43.293 --rc genhtml_legend=1 00:26:43.293 --rc geninfo_all_blocks=1 00:26:43.293 --rc geninfo_unexecuted_blocks=1 00:26:43.293 00:26:43.293 ' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:43.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.293 --rc genhtml_branch_coverage=1 00:26:43.293 --rc genhtml_function_coverage=1 00:26:43.293 --rc genhtml_legend=1 00:26:43.293 --rc geninfo_all_blocks=1 00:26:43.293 --rc geninfo_unexecuted_blocks=1 00:26:43.293 00:26:43.293 ' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:43.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.293 --rc genhtml_branch_coverage=1 00:26:43.293 --rc genhtml_function_coverage=1 00:26:43.293 --rc genhtml_legend=1 00:26:43.293 --rc geninfo_all_blocks=1 00:26:43.293 --rc geninfo_unexecuted_blocks=1 00:26:43.293 00:26:43.293 ' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:43.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.293 --rc genhtml_branch_coverage=1 00:26:43.293 --rc genhtml_function_coverage=1 00:26:43.293 --rc genhtml_legend=1 00:26:43.293 --rc geninfo_all_blocks=1 00:26:43.293 --rc geninfo_unexecuted_blocks=1 00:26:43.293 00:26:43.293 ' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.293 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:43.294 14:59:21 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81549 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81549 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81549 ']' 00:26:43.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.294 14:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:43.294 [2024-12-09 14:59:21.351000] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
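[annotation] The xtrace above corresponds to dirty_shutdown.sh@44-47: the test forks its own spdk_tgt on core 0 and blocks until the target answers on its RPC socket before issuing any bdev RPCs. A minimal sketch of that launch/wait pattern, using only the binary path, the 0x1 core mask, and the waitforlisten helper visible in this run (the trap-based cleanup registered at dirty_shutdown.sh@42 is omitted):

    # fork the SPDK target on core 0 and remember its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # autotest_common.sh helper: poll until the target listens on the
    # default UNIX domain socket /var/tmp/spdk.sock (see the message above)
    waitforlisten "$svcpid"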
00:26:43.294 [2024-12-09 14:59:21.351154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81549 ] 00:26:43.554 [2024-12-09 14:59:21.519069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.554 [2024-12-09 14:59:21.641753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:44.498 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:44.760 { 00:26:44.760 "name": "nvme0n1", 00:26:44.760 "aliases": [ 00:26:44.760 "8b40adf7-d2a7-4ad9-ad63-c4163ca3c1f6" 00:26:44.760 ], 00:26:44.760 "product_name": "NVMe disk", 00:26:44.760 "block_size": 4096, 00:26:44.760 "num_blocks": 1310720, 00:26:44.760 "uuid": "8b40adf7-d2a7-4ad9-ad63-c4163ca3c1f6", 00:26:44.760 "numa_id": -1, 00:26:44.760 "assigned_rate_limits": { 00:26:44.760 "rw_ios_per_sec": 0, 00:26:44.760 "rw_mbytes_per_sec": 0, 00:26:44.760 "r_mbytes_per_sec": 0, 00:26:44.760 "w_mbytes_per_sec": 0 00:26:44.760 }, 00:26:44.760 "claimed": true, 00:26:44.760 "claim_type": "read_many_write_one", 00:26:44.760 "zoned": false, 00:26:44.760 "supported_io_types": { 00:26:44.760 "read": true, 00:26:44.760 "write": true, 00:26:44.760 "unmap": true, 00:26:44.760 "flush": true, 00:26:44.760 "reset": true, 00:26:44.760 "nvme_admin": true, 00:26:44.760 "nvme_io": true, 00:26:44.760 "nvme_io_md": false, 00:26:44.760 "write_zeroes": true, 00:26:44.760 "zcopy": false, 00:26:44.760 "get_zone_info": false, 00:26:44.760 "zone_management": false, 00:26:44.760 "zone_append": false, 00:26:44.760 "compare": true, 00:26:44.760 "compare_and_write": false, 00:26:44.760 "abort": true, 00:26:44.760 "seek_hole": false, 00:26:44.760 "seek_data": false, 00:26:44.760 
"copy": true, 00:26:44.760 "nvme_iov_md": false 00:26:44.760 }, 00:26:44.760 "driver_specific": { 00:26:44.760 "nvme": [ 00:26:44.760 { 00:26:44.760 "pci_address": "0000:00:11.0", 00:26:44.760 "trid": { 00:26:44.760 "trtype": "PCIe", 00:26:44.760 "traddr": "0000:00:11.0" 00:26:44.760 }, 00:26:44.760 "ctrlr_data": { 00:26:44.760 "cntlid": 0, 00:26:44.760 "vendor_id": "0x1b36", 00:26:44.760 "model_number": "QEMU NVMe Ctrl", 00:26:44.760 "serial_number": "12341", 00:26:44.760 "firmware_revision": "8.0.0", 00:26:44.760 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:44.760 "oacs": { 00:26:44.760 "security": 0, 00:26:44.760 "format": 1, 00:26:44.760 "firmware": 0, 00:26:44.760 "ns_manage": 1 00:26:44.760 }, 00:26:44.760 "multi_ctrlr": false, 00:26:44.760 "ana_reporting": false 00:26:44.760 }, 00:26:44.760 "vs": { 00:26:44.760 "nvme_version": "1.4" 00:26:44.760 }, 00:26:44.760 "ns_data": { 00:26:44.760 "id": 1, 00:26:44.760 "can_share": false 00:26:44.760 } 00:26:44.760 } 00:26:44.760 ], 00:26:44.760 "mp_policy": "active_passive" 00:26:44.760 } 00:26:44.760 } 00:26:44.760 ]' 00:26:44.760 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:45.021 14:59:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:45.021 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5314e2a7-e0b8-4b94-af46-e25d18eaa649 00:26:45.021 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:45.021 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5314e2a7-e0b8-4b94-af46-e25d18eaa649 00:26:45.283 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:45.543 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=18e2036b-c644-48b5-a813-d1bd0f43e2b9 00:26:45.543 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 18e2036b-c644-48b5-a813-d1bd0f43e2b9 00:26:45.801 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3c325894-033b-4bb9-9383-95437514205f 00:26:45.801 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:45.801 14:59:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3c325894-033b-4bb9-9383-95437514205f 00:26:45.801 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3c325894-033b-4bb9-9383-95437514205f 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3c325894-033b-4bb9-9383-95437514205f 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3c325894-033b-4bb9-9383-95437514205f 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:45.802 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c325894-033b-4bb9-9383-95437514205f 00:26:46.060 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.060 { 00:26:46.060 "name": "3c325894-033b-4bb9-9383-95437514205f", 00:26:46.060 "aliases": [ 00:26:46.060 "lvs/nvme0n1p0" 00:26:46.060 ], 00:26:46.060 "product_name": "Logical Volume", 00:26:46.060 "block_size": 4096, 00:26:46.060 "num_blocks": 26476544, 00:26:46.060 "uuid": "3c325894-033b-4bb9-9383-95437514205f", 00:26:46.060 "assigned_rate_limits": { 00:26:46.060 "rw_ios_per_sec": 0, 00:26:46.060 "rw_mbytes_per_sec": 0, 00:26:46.060 "r_mbytes_per_sec": 0, 00:26:46.060 "w_mbytes_per_sec": 0 00:26:46.060 }, 00:26:46.060 "claimed": false, 00:26:46.060 "zoned": false, 00:26:46.060 "supported_io_types": { 00:26:46.060 "read": true, 00:26:46.060 "write": true, 00:26:46.060 "unmap": true, 00:26:46.060 "flush": false, 00:26:46.060 "reset": true, 00:26:46.060 "nvme_admin": false, 00:26:46.060 "nvme_io": false, 00:26:46.060 "nvme_io_md": false, 00:26:46.060 "write_zeroes": true, 00:26:46.060 "zcopy": false, 00:26:46.060 "get_zone_info": false, 00:26:46.060 "zone_management": false, 00:26:46.060 "zone_append": false, 00:26:46.060 "compare": false, 00:26:46.060 "compare_and_write": false, 00:26:46.060 "abort": false, 00:26:46.060 "seek_hole": true, 00:26:46.060 "seek_data": true, 00:26:46.060 "copy": false, 00:26:46.060 "nvme_iov_md": false 00:26:46.060 }, 00:26:46.060 "driver_specific": { 00:26:46.060 "lvol": { 00:26:46.060 "lvol_store_uuid": "18e2036b-c644-48b5-a813-d1bd0f43e2b9", 00:26:46.060 "base_bdev": "nvme0n1", 00:26:46.060 "thin_provision": true, 00:26:46.061 "num_allocated_clusters": 0, 00:26:46.061 "snapshot": false, 00:26:46.061 "clone": false, 00:26:46.061 "esnap_clone": false 00:26:46.061 } 00:26:46.061 } 00:26:46.061 } 00:26:46.061 ]' 00:26:46.061 14:59:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:46.061 14:59:24 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3c325894-033b-4bb9-9383-95437514205f 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3c325894-033b-4bb9-9383-95437514205f 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:46.319 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c325894-033b-4bb9-9383-95437514205f 00:26:46.577 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.577 { 00:26:46.577 "name": "3c325894-033b-4bb9-9383-95437514205f", 00:26:46.577 "aliases": [ 00:26:46.577 "lvs/nvme0n1p0" 00:26:46.577 ], 00:26:46.577 "product_name": "Logical Volume", 00:26:46.577 "block_size": 4096, 00:26:46.577 "num_blocks": 26476544, 00:26:46.577 "uuid": "3c325894-033b-4bb9-9383-95437514205f", 00:26:46.577 "assigned_rate_limits": { 00:26:46.577 "rw_ios_per_sec": 0, 00:26:46.577 "rw_mbytes_per_sec": 0, 00:26:46.577 "r_mbytes_per_sec": 0, 00:26:46.577 "w_mbytes_per_sec": 0 00:26:46.577 }, 00:26:46.577 "claimed": false, 00:26:46.577 "zoned": false, 00:26:46.577 "supported_io_types": { 00:26:46.577 "read": true, 00:26:46.577 "write": true, 00:26:46.577 "unmap": true, 00:26:46.577 "flush": false, 00:26:46.577 "reset": true, 00:26:46.577 "nvme_admin": false, 00:26:46.577 "nvme_io": false, 00:26:46.577 "nvme_io_md": false, 00:26:46.577 "write_zeroes": true, 00:26:46.577 "zcopy": false, 00:26:46.577 "get_zone_info": false, 00:26:46.577 "zone_management": false, 00:26:46.577 "zone_append": false, 00:26:46.577 "compare": false, 00:26:46.577 "compare_and_write": false, 00:26:46.577 "abort": false, 00:26:46.577 "seek_hole": true, 00:26:46.577 "seek_data": true, 00:26:46.577 "copy": false, 00:26:46.577 "nvme_iov_md": false 00:26:46.577 }, 00:26:46.577 "driver_specific": { 00:26:46.578 "lvol": { 00:26:46.578 "lvol_store_uuid": "18e2036b-c644-48b5-a813-d1bd0f43e2b9", 00:26:46.578 "base_bdev": "nvme0n1", 00:26:46.578 "thin_provision": true, 00:26:46.578 "num_allocated_clusters": 0, 00:26:46.578 "snapshot": false, 00:26:46.578 "clone": false, 00:26:46.578 "esnap_clone": false 00:26:46.578 } 00:26:46.578 } 00:26:46.578 } 00:26:46.578 ]' 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:46.578 14:59:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3c325894-033b-4bb9-9383-95437514205f 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3c325894-033b-4bb9-9383-95437514205f 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:46.836 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c325894-033b-4bb9-9383-95437514205f 00:26:47.094 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:47.094 { 00:26:47.094 "name": "3c325894-033b-4bb9-9383-95437514205f", 00:26:47.094 "aliases": [ 00:26:47.094 "lvs/nvme0n1p0" 00:26:47.094 ], 00:26:47.094 "product_name": "Logical Volume", 00:26:47.094 "block_size": 4096, 00:26:47.094 "num_blocks": 26476544, 00:26:47.094 "uuid": "3c325894-033b-4bb9-9383-95437514205f", 00:26:47.094 "assigned_rate_limits": { 00:26:47.094 "rw_ios_per_sec": 0, 00:26:47.094 "rw_mbytes_per_sec": 0, 00:26:47.094 "r_mbytes_per_sec": 0, 00:26:47.094 "w_mbytes_per_sec": 0 00:26:47.094 }, 00:26:47.094 "claimed": false, 00:26:47.094 "zoned": false, 00:26:47.094 "supported_io_types": { 00:26:47.094 "read": true, 00:26:47.094 "write": true, 00:26:47.094 "unmap": true, 00:26:47.094 "flush": false, 00:26:47.094 "reset": true, 00:26:47.094 "nvme_admin": false, 00:26:47.094 "nvme_io": false, 00:26:47.094 "nvme_io_md": false, 00:26:47.094 "write_zeroes": true, 00:26:47.094 "zcopy": false, 00:26:47.094 "get_zone_info": false, 00:26:47.094 "zone_management": false, 00:26:47.094 "zone_append": false, 00:26:47.094 "compare": false, 00:26:47.094 "compare_and_write": false, 00:26:47.094 "abort": false, 00:26:47.094 "seek_hole": true, 00:26:47.094 "seek_data": true, 00:26:47.094 "copy": false, 00:26:47.094 "nvme_iov_md": false 00:26:47.094 }, 00:26:47.094 "driver_specific": { 00:26:47.094 "lvol": { 00:26:47.094 "lvol_store_uuid": "18e2036b-c644-48b5-a813-d1bd0f43e2b9", 00:26:47.094 "base_bdev": "nvme0n1", 00:26:47.094 "thin_provision": true, 00:26:47.094 "num_allocated_clusters": 0, 00:26:47.094 "snapshot": false, 00:26:47.094 "clone": false, 00:26:47.094 "esnap_clone": false 00:26:47.094 } 00:26:47.094 } 00:26:47.094 } 00:26:47.094 ]' 00:26:47.094 14:59:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:47.094 14:59:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3c325894-033b-4bb9-9383-95437514205f 
--l2p_dram_limit 10' 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:47.095 14:59:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3c325894-033b-4bb9-9383-95437514205f --l2p_dram_limit 10 -c nvc0n1p0 00:26:47.353 [2024-12-09 14:59:25.223691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.223731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.353 [2024-12-09 14:59:25.223744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.353 [2024-12-09 14:59:25.223751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.223794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.223817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.353 [2024-12-09 14:59:25.223826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:47.353 [2024-12-09 14:59:25.223832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.223851] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:47.353 [2024-12-09 14:59:25.224454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.353 [2024-12-09 14:59:25.224475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.224481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.353 [2024-12-09 14:59:25.224489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:26:47.353 [2024-12-09 14:59:25.224495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.224596] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d14e84ea-2831-42e5-b340-abc80d689c33 00:26:47.353 [2024-12-09 14:59:25.225540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.225558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:47.353 [2024-12-09 14:59:25.225566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:47.353 [2024-12-09 14:59:25.225573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.230267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.230299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.353 [2024-12-09 14:59:25.230306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:26:47.353 [2024-12-09 14:59:25.230313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.230379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.230388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.353 [2024-12-09 14:59:25.230395] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:47.353 [2024-12-09 14:59:25.230405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.230442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.230451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.353 [2024-12-09 14:59:25.230459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.353 [2024-12-09 14:59:25.230466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.230483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.353 [2024-12-09 14:59:25.233375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.233483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.353 [2024-12-09 14:59:25.233500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:26:47.353 [2024-12-09 14:59:25.233506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.233538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.233545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.353 [2024-12-09 14:59:25.233552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:47.353 [2024-12-09 14:59:25.233558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.233572] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:47.353 [2024-12-09 14:59:25.233682] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.353 [2024-12-09 14:59:25.233694] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:47.353 [2024-12-09 14:59:25.233703] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:47.353 [2024-12-09 14:59:25.233712] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.353 [2024-12-09 14:59:25.233719] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.353 [2024-12-09 14:59:25.233727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.353 [2024-12-09 14:59:25.233733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.353 [2024-12-09 14:59:25.233743] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.353 [2024-12-09 14:59:25.233749] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.353 [2024-12-09 14:59:25.233756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.233766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.353 [2024-12-09 14:59:25.233774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:47.353 [2024-12-09 14:59:25.233779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.233858] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.353 [2024-12-09 14:59:25.233866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.353 [2024-12-09 14:59:25.233873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:47.353 [2024-12-09 14:59:25.233878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.353 [2024-12-09 14:59:25.233957] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.353 [2024-12-09 14:59:25.233965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.353 [2024-12-09 14:59:25.233973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.353 [2024-12-09 14:59:25.233978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.353 [2024-12-09 14:59:25.233986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.353 [2024-12-09 14:59:25.233991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.353 [2024-12-09 14:59:25.233997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.353 [2024-12-09 14:59:25.234009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.353 [2024-12-09 14:59:25.234022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:47.353 [2024-12-09 14:59:25.234027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.353 [2024-12-09 14:59:25.234034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.353 [2024-12-09 14:59:25.234040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.353 [2024-12-09 14:59:25.234046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.353 [2024-12-09 14:59:25.234051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.353 [2024-12-09 14:59:25.234064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.353 [2024-12-09 14:59:25.234083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.353 [2024-12-09 14:59:25.234100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.353 [2024-12-09 14:59:25.234117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234128] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:47.353 [2024-12-09 14:59:25.234133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.353 [2024-12-09 14:59:25.234144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.353 [2024-12-09 14:59:25.234152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.353 [2024-12-09 14:59:25.234157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.353 [2024-12-09 14:59:25.234164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.354 [2024-12-09 14:59:25.234169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.354 [2024-12-09 14:59:25.234176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.354 [2024-12-09 14:59:25.234181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.354 [2024-12-09 14:59:25.234188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.354 [2024-12-09 14:59:25.234193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.354 [2024-12-09 14:59:25.234199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.354 [2024-12-09 14:59:25.234204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.354 [2024-12-09 14:59:25.234210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.354 [2024-12-09 14:59:25.234215] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.354 [2024-12-09 14:59:25.234222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.354 [2024-12-09 14:59:25.234227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.354 [2024-12-09 14:59:25.234234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.354 [2024-12-09 14:59:25.234241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.354 [2024-12-09 14:59:25.234248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.354 [2024-12-09 14:59:25.234254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.354 [2024-12-09 14:59:25.234260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.354 [2024-12-09 14:59:25.234267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.354 [2024-12-09 14:59:25.234274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.354 [2024-12-09 14:59:25.234280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.354 [2024-12-09 14:59:25.234291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.354 [2024-12-09 14:59:25.234304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.354 [2024-12-09 14:59:25.234310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.354 [2024-12-09 14:59:25.234317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.354 [2024-12-09 14:59:25.234322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.354 [2024-12-09 14:59:25.234329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.354 [2024-12-09 14:59:25.234334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.354 [2024-12-09 14:59:25.234342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.354 [2024-12-09 14:59:25.234347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.354 [2024-12-09 14:59:25.234356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.354 [2024-12-09 14:59:25.234385] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.354 [2024-12-09 14:59:25.234393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.354 [2024-12-09 14:59:25.234407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.354 [2024-12-09 14:59:25.234412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.354 [2024-12-09 14:59:25.234419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.354 [2024-12-09 14:59:25.234425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.354 [2024-12-09 14:59:25.234432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.354 [2024-12-09 14:59:25.234438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:26:47.354 [2024-12-09 14:59:25.234444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.354 [2024-12-09 14:59:25.234485] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:47.354 [2024-12-09 14:59:25.234496] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:51.562 [2024-12-09 14:59:28.996531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:28.996625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:51.562 [2024-12-09 14:59:28.996645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3762.030 ms 00:26:51.562 [2024-12-09 14:59:28.996657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.029683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.029754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.562 [2024-12-09 14:59:29.029770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.750 ms 00:26:51.562 [2024-12-09 14:59:29.029781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.029977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.029994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.562 [2024-12-09 14:59:29.030004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:51.562 [2024-12-09 14:59:29.030022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.065848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.065903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.562 [2024-12-09 14:59:29.065916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.788 ms 00:26:51.562 [2024-12-09 14:59:29.065927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.065965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.065981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.562 [2024-12-09 14:59:29.065990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:51.562 [2024-12-09 14:59:29.066008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.066598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.066629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.562 [2024-12-09 14:59:29.066641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:26:51.562 [2024-12-09 14:59:29.066651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.066768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.066780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.562 [2024-12-09 14:59:29.066792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:51.562 [2024-12-09 14:59:29.066838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.084686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.084937] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.562 [2024-12-09 14:59:29.084959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.826 ms 00:26:51.562 [2024-12-09 14:59:29.084970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.110733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:51.562 [2024-12-09 14:59:29.115132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.115186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.562 [2024-12-09 14:59:29.115205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.060 ms 00:26:51.562 [2024-12-09 14:59:29.115216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.221053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.221313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:51.562 [2024-12-09 14:59:29.221349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.779 ms 00:26:51.562 [2024-12-09 14:59:29.221359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.221570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.221586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.562 [2024-12-09 14:59:29.221601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:26:51.562 [2024-12-09 14:59:29.221610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.248681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.248903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:51.562 [2024-12-09 14:59:29.248935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.988 ms 00:26:51.562 [2024-12-09 14:59:29.248944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.274988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.275039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:51.562 [2024-12-09 14:59:29.275055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.986 ms 00:26:51.562 [2024-12-09 14:59:29.275063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.275693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.562 [2024-12-09 14:59:29.275715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.562 [2024-12-09 14:59:29.275727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:26:51.562 [2024-12-09 14:59:29.275739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.562 [2024-12-09 14:59:29.365381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.365436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:51.563 [2024-12-09 14:59:29.365459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.574 ms 00:26:51.563 [2024-12-09 14:59:29.365467] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.394075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.394127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:51.563 [2024-12-09 14:59:29.394145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.511 ms 00:26:51.563 [2024-12-09 14:59:29.394154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.420829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.420878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:51.563 [2024-12-09 14:59:29.420893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.613 ms 00:26:51.563 [2024-12-09 14:59:29.420901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.448153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.448350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.563 [2024-12-09 14:59:29.448379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.190 ms 00:26:51.563 [2024-12-09 14:59:29.448386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.448441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.448451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.563 [2024-12-09 14:59:29.448467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:51.563 [2024-12-09 14:59:29.448475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.448585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.563 [2024-12-09 14:59:29.448599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.563 [2024-12-09 14:59:29.448610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:51.563 [2024-12-09 14:59:29.448618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.563 [2024-12-09 14:59:29.449838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4225.595 ms, result 0 00:26:51.563 { 00:26:51.563 "name": "ftl0", 00:26:51.563 "uuid": "d14e84ea-2831-42e5-b340-abc80d689c33" 00:26:51.563 } 00:26:51.563 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:51.563 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:51.823 /dev/nbd0 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:51.823 1+0 records in 00:26:51.823 1+0 records out 00:26:51.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706039 s, 5.8 MB/s 00:26:51.823 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:52.084 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:52.084 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:52.084 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:52.084 14:59:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:52.084 14:59:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:52.084 [2024-12-09 14:59:30.023199] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:26:52.084 [2024-12-09 14:59:30.023362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81697 ] 00:26:52.084 [2024-12-09 14:59:30.190793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.346 [2024-12-09 14:59:30.335647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.737  [2024-12-09T14:59:32.802Z] Copying: 185/1024 [MB] (185 MBps) [2024-12-09T14:59:33.738Z] Copying: 415/1024 [MB] (229 MBps) [2024-12-09T14:59:34.673Z] Copying: 669/1024 [MB] (253 MBps) [2024-12-09T14:59:35.239Z] Copying: 915/1024 [MB] (246 MBps) [2024-12-09T14:59:35.806Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:26:57.684 00:26:57.684 14:59:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:59.587 14:59:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:59.587 [2024-12-09 14:59:37.678863] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:26:59.587 [2024-12-09 14:59:37.678984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81780 ] 00:26:59.845 [2024-12-09 14:59:37.834515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.845 [2024-12-09 14:59:37.929451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.227  [2024-12-09T14:59:40.285Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-09T14:59:41.227Z] Copying: 42/1024 [MB] (24 MBps) [2024-12-09T14:59:42.163Z] Copying: 60/1024 [MB] (17 MBps) [2024-12-09T14:59:43.548Z] Copying: 77/1024 [MB] (17 MBps) [2024-12-09T14:59:44.487Z] Copying: 96/1024 [MB] (18 MBps) [2024-12-09T14:59:45.431Z] Copying: 112/1024 [MB] (16 MBps) [2024-12-09T14:59:46.371Z] Copying: 130/1024 [MB] (17 MBps) [2024-12-09T14:59:47.315Z] Copying: 148/1024 [MB] (18 MBps) [2024-12-09T14:59:48.256Z] Copying: 167/1024 [MB] (19 MBps) [2024-12-09T14:59:49.198Z] Copying: 186/1024 [MB] (18 MBps) [2024-12-09T14:59:50.584Z] Copying: 207/1024 [MB] (20 MBps) [2024-12-09T14:59:51.162Z] Copying: 226/1024 [MB] (19 MBps) [2024-12-09T14:59:52.205Z] Copying: 244/1024 [MB] (18 MBps) [2024-12-09T14:59:53.586Z] Copying: 259/1024 [MB] (15 MBps) [2024-12-09T14:59:54.158Z] Copying: 278/1024 [MB] (19 MBps) [2024-12-09T14:59:55.543Z] Copying: 297/1024 [MB] (18 MBps) [2024-12-09T14:59:56.481Z] Copying: 312/1024 [MB] (15 MBps) [2024-12-09T14:59:57.424Z] Copying: 345/1024 [MB] (33 MBps) [2024-12-09T14:59:58.366Z] Copying: 363/1024 [MB] (17 MBps) [2024-12-09T14:59:59.308Z] Copying: 385/1024 [MB] (22 MBps) [2024-12-09T15:00:00.242Z] Copying: 405/1024 [MB] (19 MBps) [2024-12-09T15:00:01.176Z] Copying: 433/1024 [MB] (27 MBps) [2024-12-09T15:00:02.549Z] Copying: 468/1024 [MB] (34 MBps) [2024-12-09T15:00:03.484Z] Copying: 502/1024 [MB] (34 MBps) [2024-12-09T15:00:04.419Z] Copying: 536/1024 [MB] (34 MBps) [2024-12-09T15:00:05.360Z] Copying: 571/1024 [MB] (34 MBps) [2024-12-09T15:00:06.296Z] Copying: 588/1024 [MB] (17 MBps) [2024-12-09T15:00:07.234Z] Copying: 618/1024 [MB] (29 MBps) [2024-12-09T15:00:08.175Z] Copying: 641/1024 [MB] (23 MBps) [2024-12-09T15:00:09.559Z] Copying: 662/1024 [MB] (20 MBps) [2024-12-09T15:00:10.500Z] Copying: 683/1024 [MB] (21 MBps) [2024-12-09T15:00:11.443Z] Copying: 701/1024 [MB] (18 MBps) [2024-12-09T15:00:12.386Z] Copying: 716/1024 [MB] (14 MBps) [2024-12-09T15:00:13.328Z] Copying: 729/1024 [MB] (13 MBps) [2024-12-09T15:00:14.269Z] Copying: 746/1024 [MB] (16 MBps) [2024-12-09T15:00:15.210Z] Copying: 765/1024 [MB] (18 MBps) [2024-12-09T15:00:16.585Z] Copying: 786/1024 [MB] (21 MBps) [2024-12-09T15:00:17.154Z] Copying: 821/1024 [MB] (35 MBps) [2024-12-09T15:00:18.534Z] Copying: 841/1024 [MB] (19 MBps) [2024-12-09T15:00:19.474Z] Copying: 859/1024 [MB] (18 MBps) [2024-12-09T15:00:20.417Z] Copying: 879/1024 [MB] (20 MBps) [2024-12-09T15:00:21.354Z] Copying: 897/1024 [MB] (17 MBps) [2024-12-09T15:00:22.294Z] Copying: 915/1024 [MB] (18 MBps) [2024-12-09T15:00:23.237Z] Copying: 929/1024 [MB] (13 MBps) [2024-12-09T15:00:24.180Z] Copying: 944/1024 [MB] (14 MBps) [2024-12-09T15:00:25.565Z] Copying: 960/1024 [MB] (16 MBps) [2024-12-09T15:00:26.500Z] Copying: 974/1024 [MB] (13 MBps) [2024-12-09T15:00:26.768Z] Copying: 1005/1024 [MB] (31 MBps) [2024-12-09T15:00:27.374Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:27:49.252 00:27:49.252 15:00:27 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:49.252 15:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:49.510 15:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:49.769 [2024-12-09 15:00:27.648503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.648542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:49.769 [2024-12-09 15:00:27.648554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:49.769 [2024-12-09 15:00:27.648562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.648582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:49.769 [2024-12-09 15:00:27.650591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.650715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:49.769 [2024-12-09 15:00:27.650732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:27:49.769 [2024-12-09 15:00:27.650738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.652595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.652622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:49.769 [2024-12-09 15:00:27.652632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.833 ms 00:27:49.769 [2024-12-09 15:00:27.652638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.666984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.667013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:49.769 [2024-12-09 15:00:27.667023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.327 ms 00:27:49.769 [2024-12-09 15:00:27.667030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.671799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.671828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:49.769 [2024-12-09 15:00:27.671838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 00:27:49.769 [2024-12-09 15:00:27.671844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.690079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.690106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:49.769 [2024-12-09 15:00:27.690116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.181 ms 00:27:49.769 [2024-12-09 15:00:27.690122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.702591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.702620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:49.769 [2024-12-09 15:00:27.702632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.437 ms 00:27:49.769 
[2024-12-09 15:00:27.702639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.702747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.702755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:49.769 [2024-12-09 15:00:27.702763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:49.769 [2024-12-09 15:00:27.702769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.721136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.721256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:49.769 [2024-12-09 15:00:27.721272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.352 ms 00:27:49.769 [2024-12-09 15:00:27.721278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.738840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.738866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:49.769 [2024-12-09 15:00:27.738875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.534 ms 00:27:49.769 [2024-12-09 15:00:27.738881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.756049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.756152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:49.769 [2024-12-09 15:00:27.756167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.136 ms 00:27:49.769 [2024-12-09 15:00:27.756172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.773438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.769 [2024-12-09 15:00:27.773464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:49.769 [2024-12-09 15:00:27.773473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:27:49.769 [2024-12-09 15:00:27.773479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.769 [2024-12-09 15:00:27.773507] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:49.769 [2024-12-09 15:00:27.773518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773566] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773727] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:49.769 [2024-12-09 15:00:27.773746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 
15:00:27.773906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.773999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:27:49.770 [2024-12-09 15:00:27.774067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:49.770 [2024-12-09 15:00:27.774194] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:49.770 [2024-12-09 15:00:27.774201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d14e84ea-2831-42e5-b340-abc80d689c33 00:27:49.770 [2024-12-09 15:00:27.774207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:49.770 [2024-12-09 15:00:27.774216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:49.770 [2024-12-09 15:00:27.774223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:49.770 [2024-12-09 15:00:27.774229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:49.770 [2024-12-09 15:00:27.774235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:49.770 [2024-12-09 15:00:27.774242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:49.770 [2024-12-09 15:00:27.774248] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:49.770 [2024-12-09 15:00:27.774254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:49.770 [2024-12-09 15:00:27.774260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:49.770 [2024-12-09 15:00:27.774266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.770 [2024-12-09 15:00:27.774272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:49.770 [2024-12-09 15:00:27.774280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:27:49.770 [2024-12-09 15:00:27.774286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.770 [2024-12-09 15:00:27.784248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.770 [2024-12-09 15:00:27.784274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:49.770 [2024-12-09 15:00:27.784283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.938 ms 00:27:49.770 [2024-12-09 15:00:27.784289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.770 [2024-12-09 15:00:27.784560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.770 [2024-12-09 15:00:27.784566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:49.770 [2024-12-09 15:00:27.784574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:27:49.770 [2024-12-09 15:00:27.784580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.770 [2024-12-09 15:00:27.817650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.770 [2024-12-09 15:00:27.817678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.770 [2024-12-09 15:00:27.817688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.770 [2024-12-09 15:00:27.817695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.770 [2024-12-09 15:00:27.817735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.770 [2024-12-09 15:00:27.817741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.770 [2024-12-09 15:00:27.817749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.770 [2024-12-09 15:00:27.817754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.770 [2024-12-09 15:00:27.817826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.771 [2024-12-09 15:00:27.817837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.771 [2024-12-09 15:00:27.817844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.771 [2024-12-09 15:00:27.817850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.771 [2024-12-09 15:00:27.817866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.771 [2024-12-09 15:00:27.817872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.771 [2024-12-09 15:00:27.817879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.771 [2024-12-09 15:00:27.817885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.771 [2024-12-09 15:00:27.876564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:49.771 [2024-12-09 15:00:27.876598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.771 [2024-12-09 15:00:27.876608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.771 [2024-12-09 15:00:27.876614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:50.029 [2024-12-09 15:00:27.924309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:50.029 [2024-12-09 15:00:27.924417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:50.029 [2024-12-09 15:00:27.924476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:50.029 [2024-12-09 15:00:27.924568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:50.029 [2024-12-09 15:00:27.924615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:50.029 [2024-12-09 15:00:27.924664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 15:00:27.924707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.029 [2024-12-09 15:00:27.924715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.029 [2024-12-09 15:00:27.924722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.029 [2024-12-09 15:00:27.924727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.029 [2024-12-09 
15:00:27.924843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.295 ms, result 0 00:27:50.029 true 00:27:50.029 15:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81549 00:27:50.029 15:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81549 00:27:50.029 15:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:50.029 [2024-12-09 15:00:27.999358] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:27:50.029 [2024-12-09 15:00:27.999541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82302 ] 00:27:50.029 [2024-12-09 15:00:28.148742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.287 [2024-12-09 15:00:28.227146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.661  [2024-12-09T15:00:30.716Z] Copying: 256/1024 [MB] (256 MBps) [2024-12-09T15:00:31.651Z] Copying: 513/1024 [MB] (257 MBps) [2024-12-09T15:00:32.586Z] Copying: 769/1024 [MB] (256 MBps) [2024-12-09T15:00:33.153Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:27:55.031 00:27:55.031 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81549 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:55.031 15:00:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:55.031 [2024-12-09 15:00:33.033301] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:27:55.031 [2024-12-09 15:00:33.033587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82356 ] 00:27:55.290 [2024-12-09 15:00:33.188378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.290 [2024-12-09 15:00:33.267112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.548 [2024-12-09 15:00:33.478329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.548 [2024-12-09 15:00:33.478383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.548 [2024-12-09 15:00:33.540845] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:55.548 [2024-12-09 15:00:33.541132] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:55.548 [2024-12-09 15:00:33.541488] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:55.808 [2024-12-09 15:00:33.733265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.733299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:55.808 [2024-12-09 15:00:33.733309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:55.808 [2024-12-09 15:00:33.733318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.733352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.733359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.808 [2024-12-09 15:00:33.733366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:55.808 [2024-12-09 15:00:33.733372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.733385] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:55.808 [2024-12-09 15:00:33.733935] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:55.808 [2024-12-09 15:00:33.733947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.733953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:55.808 [2024-12-09 15:00:33.733959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:27:55.808 [2024-12-09 15:00:33.733965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.734878] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:55.808 [2024-12-09 15:00:33.744579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.744710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:55.808 [2024-12-09 15:00:33.744724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.702 ms 00:27:55.808 [2024-12-09 15:00:33.744730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.744773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.744782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:55.808 [2024-12-09 15:00:33.744788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:55.808 [2024-12-09 15:00:33.744794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.749247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.749273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.808 [2024-12-09 15:00:33.749281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms 00:27:55.808 [2024-12-09 15:00:33.749287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.749348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.749355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.808 [2024-12-09 15:00:33.749361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:55.808 [2024-12-09 15:00:33.749366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.749398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.749405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:55.808 [2024-12-09 15:00:33.749411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:55.808 [2024-12-09 15:00:33.749416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.749430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:55.808 [2024-12-09 15:00:33.752099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.752123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.808 [2024-12-09 15:00:33.752130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:27:55.808 [2024-12-09 15:00:33.752135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.752166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.808 [2024-12-09 15:00:33.752173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:55.808 [2024-12-09 15:00:33.752180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:55.808 [2024-12-09 15:00:33.752186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.808 [2024-12-09 15:00:33.752201] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:55.808 [2024-12-09 15:00:33.752217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:55.809 [2024-12-09 15:00:33.752243] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:55.809 [2024-12-09 15:00:33.752255] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:55.809 [2024-12-09 15:00:33.752333] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:55.809 [2024-12-09 15:00:33.752341] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:55.809 
[2024-12-09 15:00:33.752349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:55.809 [2024-12-09 15:00:33.752358] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752371] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:55.809 [2024-12-09 15:00:33.752377] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:55.809 [2024-12-09 15:00:33.752382] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:55.809 [2024-12-09 15:00:33.752388] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:55.809 [2024-12-09 15:00:33.752393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.809 [2024-12-09 15:00:33.752399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:55.809 [2024-12-09 15:00:33.752405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:55.809 [2024-12-09 15:00:33.752410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.809 [2024-12-09 15:00:33.752472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.809 [2024-12-09 15:00:33.752480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:55.809 [2024-12-09 15:00:33.752487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:55.809 [2024-12-09 15:00:33.752492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.809 [2024-12-09 15:00:33.752568] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:55.809 [2024-12-09 15:00:33.752576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:55.809 [2024-12-09 15:00:33.752583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:55.809 [2024-12-09 15:00:33.752599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:55.809 [2024-12-09 15:00:33.752615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.809 [2024-12-09 15:00:33.752629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:55.809 [2024-12-09 15:00:33.752635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:55.809 [2024-12-09 15:00:33.752640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:55.809 [2024-12-09 15:00:33.752645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:55.809 [2024-12-09 15:00:33.752653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:55.809 [2024-12-09 15:00:33.752658] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:55.809 [2024-12-09 15:00:33.752669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:55.809 [2024-12-09 15:00:33.752684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:55.809 [2024-12-09 15:00:33.752700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:55.809 [2024-12-09 15:00:33.752714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:55.809 [2024-12-09 15:00:33.752729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:55.809 [2024-12-09 15:00:33.752744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.809 [2024-12-09 15:00:33.752754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:55.809 [2024-12-09 15:00:33.752759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:55.809 [2024-12-09 15:00:33.752764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:55.809 [2024-12-09 15:00:33.752769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:55.809 [2024-12-09 15:00:33.752774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:55.809 [2024-12-09 15:00:33.752779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:55.809 [2024-12-09 15:00:33.752788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:55.809 [2024-12-09 15:00:33.752793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 15:00:33.752798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:55.809 [2024-12-09 15:00:33.752819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:55.809 [2024-12-09 15:00:33.752827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:55.809 [2024-12-09 
15:00:33.752843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:55.809 [2024-12-09 15:00:33.752848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:55.809 [2024-12-09 15:00:33.752853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:55.809 [2024-12-09 15:00:33.752865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:55.809 [2024-12-09 15:00:33.752870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:55.809 [2024-12-09 15:00:33.752876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:55.809 [2024-12-09 15:00:33.752882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:55.809 [2024-12-09 15:00:33.752889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:55.809 [2024-12-09 15:00:33.752902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:55.809 [2024-12-09 15:00:33.752912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:55.809 [2024-12-09 15:00:33.752918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:55.809 [2024-12-09 15:00:33.752923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:55.809 [2024-12-09 15:00:33.752928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:55.809 [2024-12-09 15:00:33.752934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:55.809 [2024-12-09 15:00:33.752939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:55.809 [2024-12-09 15:00:33.752945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:55.809 [2024-12-09 15:00:33.752950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:55.809 [2024-12-09 15:00:33.752977] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:55.809 [2024-12-09 15:00:33.752983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:55.809 [2024-12-09 15:00:33.752994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:55.809 [2024-12-09 15:00:33.753000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:55.809 [2024-12-09 15:00:33.753005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:55.809 [2024-12-09 15:00:33.753011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.809 [2024-12-09 15:00:33.753017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:55.809 [2024-12-09 15:00:33.753023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:27:55.809 [2024-12-09 15:00:33.753029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.809 [2024-12-09 15:00:33.773859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.809 [2024-12-09 15:00:33.773887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.810 [2024-12-09 15:00:33.773895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.796 ms 00:27:55.810 [2024-12-09 15:00:33.773901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.773969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.773975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:55.810 [2024-12-09 15:00:33.773981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:55.810 [2024-12-09 15:00:33.773987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.812493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.812632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.810 [2024-12-09 15:00:33.812650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.464 ms 00:27:55.810 [2024-12-09 15:00:33.812657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.812696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.812704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.810 [2024-12-09 15:00:33.812711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:55.810 [2024-12-09 15:00:33.812717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.813056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.813070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.810 [2024-12-09 15:00:33.813078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:27:55.810 [2024-12-09 15:00:33.813087] mngt/ftl_mngt.c: 
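
The "SB metadata layout" dumps above list every region as type/ver/blk_offs/blk_sz in hex. A useful property to check in such a dump is that the regions tile the device with no gaps or overlaps; the sketch below does that for the nvc table, with the offsets and sizes transcribed from the log lines above (the 4 KiB FTL block size is an assumption of this check, not something the dump prints).

    # Contiguity check over the nvc "SB metadata layout" dump above.
    # Each tuple is (type, blk_offs, blk_sz) as printed in the log.
    regions = [
        (0x0, 0x0, 0x20),       (0x2, 0x20, 0x5000),
        (0x3, 0x5020, 0x80),    (0x4, 0x50a0, 0x80),
        (0xa, 0x5120, 0x800),   (0xb, 0x5920, 0x800),
        (0xc, 0x6120, 0x800),   (0xd, 0x6920, 0x800),
        (0xe, 0x7120, 0x40),    (0xf, 0x7160, 0x40),
        (0x10, 0x71a0, 0x20),   (0x11, 0x71c0, 0x20),
        (0x6, 0x71e0, 0x20),    (0x7, 0x7200, 0x20),
        (0xfffffffe, 0x7220, 0x13c0e0),   # trailing free region
    ]
    end = 0
    for rtype, offs, size in regions:
        assert offs == end, f"gap or overlap before region type {rtype:#x}"
        end = offs + size
    # 0x143300 blocks * 4 KiB = 5171.00 MiB, matching the
    # "NV cache device capacity: 5171.00 MiB" reported later in this log.
    print(f"{end:#x} blocks = {end * 4096 / 2**20:.2f} MiB")

The same walk over the base-dev table (type 0x1 at 0x0 through the free region at 0x19003a0) closes at 0x1940000 blocks, i.e. the 103424.00 MiB base device capacity reported later.
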
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.813188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.813195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.810 [2024-12-09 15:00:33.813201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:55.810 [2024-12-09 15:00:33.813207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.823681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.823782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.810 [2024-12-09 15:00:33.823794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.458 ms 00:27:55.810 [2024-12-09 15:00:33.823813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.833598] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:55.810 [2024-12-09 15:00:33.833625] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:55.810 [2024-12-09 15:00:33.833634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.833641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:55.810 [2024-12-09 15:00:33.833648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.744 ms 00:27:55.810 [2024-12-09 15:00:33.833654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.852419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.852446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:55.810 [2024-12-09 15:00:33.852455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms 00:27:55.810 [2024-12-09 15:00:33.852463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.861575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.861600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:55.810 [2024-12-09 15:00:33.861609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.077 ms 00:27:55.810 [2024-12-09 15:00:33.861614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.870712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.870739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:55.810 [2024-12-09 15:00:33.870746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.069 ms 00:27:55.810 [2024-12-09 15:00:33.870752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.871264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.871284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:55.810 [2024-12-09 15:00:33.871292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:27:55.810 [2024-12-09 15:00:33.871298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 
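
Every management step in this log is emitted by mngt/ftl_mngt.c as the same four-entry quadruple: "Action", "name: ...", "duration: ... ms", "status: ...". That regularity makes per-step timing trivial to extract. A minimal sketch, assuming the log text is already loaded into a string (the regexes are illustrative and keyed to the exact packed wording above, nothing here is an SPDK API):

    import re

    # Pair each "name: <step>" with the "duration: <n> ms" that follows,
    # mirroring the Action/name/duration/status quadruples in this log.
    NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
                      r"name: (.+?) \d{2}:\d{2}:\d{2}\.")
    DUR  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
                      r"duration: ([0-9.]+) ms")

    def step_durations(log_text):
        names = NAME.findall(log_text)
        durations = [float(d) for d in DUR.findall(log_text)]
        return list(zip(names, durations))

Summing the per-step durations of one management process gives slightly less than the total its finish_msg reports (211.803 ms for 'FTL startup' below), since time spent between steps also counts toward the total.
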
[2024-12-09 15:00:33.915199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.915241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:55.810 [2024-12-09 15:00:33.915252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.886 ms 00:27:55.810 [2024-12-09 15:00:33.915259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.923174] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:55.810 [2024-12-09 15:00:33.925366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.925388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:55.810 [2024-12-09 15:00:33.925398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.062 ms 00:27:55.810 [2024-12-09 15:00:33.925408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.925479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.925487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:55.810 [2024-12-09 15:00:33.925494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:55.810 [2024-12-09 15:00:33.925500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.925549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.925557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:55.810 [2024-12-09 15:00:33.925563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:55.810 [2024-12-09 15:00:33.925569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.925586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.925593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:55.810 [2024-12-09 15:00:33.925599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:55.810 [2024-12-09 15:00:33.925604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.810 [2024-12-09 15:00:33.925629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:55.810 [2024-12-09 15:00:33.925637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.810 [2024-12-09 15:00:33.925643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:55.810 [2024-12-09 15:00:33.925649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:55.810 [2024-12-09 15:00:33.925658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.069 [2024-12-09 15:00:33.944448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.069 [2024-12-09 15:00:33.944482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:56.069 [2024-12-09 15:00:33.944492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.775 ms 00:27:56.069 [2024-12-09 15:00:33.944499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.069 [2024-12-09 15:00:33.944556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.069 [2024-12-09 15:00:33.944563] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:56.069 [2024-12-09 15:00:33.944570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:56.069 [2024-12-09 15:00:33.944576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.069 [2024-12-09 15:00:33.945411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.803 ms, result 0 00:27:57.002  [2024-12-09T15:00:36.064Z] Copying: 40/1024 [MB] (40 MBps) [2024-12-09T15:00:37.008Z] Copying: 62/1024 [MB] (22 MBps) [2024-12-09T15:00:38.393Z] Copying: 72/1024 [MB] (10 MBps) [2024-12-09T15:00:38.966Z] Copying: 95/1024 [MB] (22 MBps) [2024-12-09T15:00:40.351Z] Copying: 112/1024 [MB] (17 MBps) [2024-12-09T15:00:41.296Z] Copying: 138/1024 [MB] (25 MBps) [2024-12-09T15:00:42.239Z] Copying: 184/1024 [MB] (46 MBps) [2024-12-09T15:00:43.180Z] Copying: 206/1024 [MB] (21 MBps) [2024-12-09T15:00:44.118Z] Copying: 227/1024 [MB] (21 MBps) [2024-12-09T15:00:45.058Z] Copying: 255/1024 [MB] (27 MBps) [2024-12-09T15:00:46.001Z] Copying: 278/1024 [MB] (23 MBps) [2024-12-09T15:00:47.388Z] Copying: 301/1024 [MB] (23 MBps) [2024-12-09T15:00:47.958Z] Copying: 323/1024 [MB] (22 MBps) [2024-12-09T15:00:49.335Z] Copying: 341/1024 [MB] (17 MBps) [2024-12-09T15:00:50.282Z] Copying: 385/1024 [MB] (43 MBps) [2024-12-09T15:00:51.219Z] Copying: 409/1024 [MB] (24 MBps) [2024-12-09T15:00:52.162Z] Copying: 434/1024 [MB] (24 MBps) [2024-12-09T15:00:53.105Z] Copying: 461/1024 [MB] (27 MBps) [2024-12-09T15:00:54.046Z] Copying: 481/1024 [MB] (20 MBps) [2024-12-09T15:00:54.990Z] Copying: 493/1024 [MB] (12 MBps) [2024-12-09T15:00:56.366Z] Copying: 510/1024 [MB] (16 MBps) [2024-12-09T15:00:57.300Z] Copying: 550/1024 [MB] (39 MBps) [2024-12-09T15:00:58.235Z] Copying: 580/1024 [MB] (30 MBps) [2024-12-09T15:00:59.180Z] Copying: 628/1024 [MB] (48 MBps) [2024-12-09T15:01:00.126Z] Copying: 649/1024 [MB] (21 MBps) [2024-12-09T15:01:01.135Z] Copying: 670/1024 [MB] (20 MBps) [2024-12-09T15:01:02.078Z] Copying: 684/1024 [MB] (14 MBps) [2024-12-09T15:01:03.022Z] Copying: 703/1024 [MB] (18 MBps) [2024-12-09T15:01:03.968Z] Copying: 714/1024 [MB] (10 MBps) [2024-12-09T15:01:05.356Z] Copying: 730/1024 [MB] (16 MBps) [2024-12-09T15:01:06.298Z] Copying: 747/1024 [MB] (17 MBps) [2024-12-09T15:01:07.244Z] Copying: 762/1024 [MB] (14 MBps) [2024-12-09T15:01:08.189Z] Copying: 780/1024 [MB] (18 MBps) [2024-12-09T15:01:09.132Z] Copying: 798/1024 [MB] (17 MBps) [2024-12-09T15:01:10.078Z] Copying: 818/1024 [MB] (20 MBps) [2024-12-09T15:01:11.019Z] Copying: 832/1024 [MB] (13 MBps) [2024-12-09T15:01:11.962Z] Copying: 843/1024 [MB] (11 MBps) [2024-12-09T15:01:13.341Z] Copying: 853/1024 [MB] (10 MBps) [2024-12-09T15:01:14.274Z] Copying: 867/1024 [MB] (14 MBps) [2024-12-09T15:01:15.208Z] Copying: 918/1024 [MB] (50 MBps) [2024-12-09T15:01:16.141Z] Copying: 951/1024 [MB] (32 MBps) [2024-12-09T15:01:17.075Z] Copying: 979/1024 [MB] (28 MBps) [2024-12-09T15:01:17.641Z] Copying: 1023/1024 [MB] (43 MBps) [2024-12-09T15:01:17.641Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-09 15:01:17.631620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.519 [2024-12-09 15:01:17.631761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:39.519 [2024-12-09 15:01:17.631779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:39.519 [2024-12-09 15:01:17.631786] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:39.519 [2024-12-09 15:01:17.633383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:39.519 [2024-12-09 15:01:17.638029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.519 [2024-12-09 15:01:17.638059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:39.519 [2024-12-09 15:01:17.638068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:28:39.519 [2024-12-09 15:01:17.638079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.646864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.646904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:39.779 [2024-12-09 15:01:17.646912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.496 ms 00:28:39.779 [2024-12-09 15:01:17.646919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.663557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.663588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:39.779 [2024-12-09 15:01:17.663597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.625 ms 00:28:39.779 [2024-12-09 15:01:17.663603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.668375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.668400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:39.779 [2024-12-09 15:01:17.668408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:28:39.779 [2024-12-09 15:01:17.668415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.687036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.687065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:39.779 [2024-12-09 15:01:17.687074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.583 ms 00:28:39.779 [2024-12-09 15:01:17.687080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.698728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.698758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:39.779 [2024-12-09 15:01:17.698767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.620 ms 00:28:39.779 [2024-12-09 15:01:17.698775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.757471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.757510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:39.779 [2024-12-09 15:01:17.757522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.660 ms 00:28:39.779 [2024-12-09 15:01:17.757528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.775391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.775417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
band info metadata 00:28:39.779 [2024-12-09 15:01:17.775425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.851 ms 00:28:39.779 [2024-12-09 15:01:17.775436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.793162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.793189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:39.779 [2024-12-09 15:01:17.793197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.700 ms 00:28:39.779 [2024-12-09 15:01:17.793202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.810824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.810850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:39.779 [2024-12-09 15:01:17.810857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.595 ms 00:28:39.779 [2024-12-09 15:01:17.810863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.828116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.779 [2024-12-09 15:01:17.828141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:39.779 [2024-12-09 15:01:17.828149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:28:39.779 [2024-12-09 15:01:17.828155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.779 [2024-12-09 15:01:17.828179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:39.779 [2024-12-09 15:01:17.828190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119040 / 261120 wr_cnt: 1 state: open 00:28:39.779 [2024-12-09 15:01:17.828197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 
0 state: free 00:28:39.779 [2024-12-09 15:01:17.828269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
38: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:39.779 [2024-12-09 15:01:17.828418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828550] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:39.780 [2024-12-09 15:01:17.828777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:39.780 [2024-12-09 15:01:17.828783] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d14e84ea-2831-42e5-b340-abc80d689c33 00:28:39.780 [2024-12-09 15:01:17.828795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119040 00:28:39.780 [2024-12-09 15:01:17.828809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120000 00:28:39.780 [2024-12-09 15:01:17.828815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119040 00:28:39.780 [2024-12-09 15:01:17.828821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:28:39.780 [2024-12-09 15:01:17.828827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:39.780 [2024-12-09 15:01:17.828833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:39.780 [2024-12-09 15:01:17.828838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:39.780 [2024-12-09 15:01:17.828843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:39.780 [2024-12-09 15:01:17.828848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:39.780 [2024-12-09 15:01:17.828854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.780 [2024-12-09 15:01:17.828860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:39.780 [2024-12-09 15:01:17.828866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:28:39.780 [2024-12-09 15:01:17.828872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 
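
The statistics dump above contains everything needed to reproduce the WAF figure it prints: write amplification is total device writes divided by user writes, and here user writes equal the 119040 valid LBAs sitting in the single open band (Band 1). The arithmetic, using the values from the dump:

    total_writes = 120000   # "total writes" in the dump above
    user_writes  = 119040   # "user writes" (== total valid LBAs here)
    print(f"WAF: {total_writes / user_writes:.4f}")   # WAF: 1.0081

The 960 extra blocks of device writes over user writes are the FTL's own bookkeeping traffic, which is exactly what WAF measures.
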
15:01:17.838373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.780 [2024-12-09 15:01:17.838398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:39.780 [2024-12-09 15:01:17.838406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.490 ms 00:28:39.780 [2024-12-09 15:01:17.838413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 15:01:17.838675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.780 [2024-12-09 15:01:17.838688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:39.780 [2024-12-09 15:01:17.838698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:28:39.780 [2024-12-09 15:01:17.838704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 15:01:17.864441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.780 [2024-12-09 15:01:17.864470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:39.780 [2024-12-09 15:01:17.864477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.780 [2024-12-09 15:01:17.864483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 15:01:17.864524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.780 [2024-12-09 15:01:17.864530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:39.780 [2024-12-09 15:01:17.864539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.780 [2024-12-09 15:01:17.864545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 15:01:17.864603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.780 [2024-12-09 15:01:17.864611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:39.780 [2024-12-09 15:01:17.864618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.780 [2024-12-09 15:01:17.864624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.780 [2024-12-09 15:01:17.864635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.780 [2024-12-09 15:01:17.864640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:39.780 [2024-12-09 15:01:17.864647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.780 [2024-12-09 15:01:17.864653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.922826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.922862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.041 [2024-12-09 15:01:17.922870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.922877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:40.041 [2024-12-09 15:01:17.971113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971122] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.041 [2024-12-09 15:01:17.971170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.041 [2024-12-09 15:01:17.971235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.041 [2024-12-09 15:01:17.971321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:40.041 [2024-12-09 15:01:17.971363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.041 [2024-12-09 15:01:17.971410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.041 [2024-12-09 15:01:17.971455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.041 [2024-12-09 15:01:17.971461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.041 [2024-12-09 15:01:17.971467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.041 [2024-12-09 15:01:17.971556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.626 ms, result 0 00:28:41.427 00:28:41.427 00:28:41.427 15:01:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:43.338 15:01:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:43.338 [2024-12-09 15:01:21.331213] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
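
The spdk_dd command above reads --count=262144 blocks from ftl0. At a 4 KiB logical block size (inferred; the log never prints it) that is exactly 1024 MiB, the same size as the copy whose progress output appears earlier in this section, and the "(average 23 MBps)" figure there squares with its timestamps:

    count = 262144                 # --count from the spdk_dd line above
    block_size = 4096              # assumed 4 KiB FTL logical block
    total_mib = count * block_size // 2**20
    print(total_mib)               # 1024 -> "Copying: 1024/1024 [MB]"

    avg_mbps = 23                  # "(average 23 MBps)"
    print(total_mib / avg_mbps)    # ~44.5 s, in line with the roughly
                                   # 15:00:34-15:01:17 progress window
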
00:28:43.338 [2024-12-09 15:01:21.331306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82847 ] 00:28:43.597 [2024-12-09 15:01:21.477769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.597 [2024-12-09 15:01:21.555695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.856 [2024-12-09 15:01:21.766767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:43.856 [2024-12-09 15:01:21.766831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:43.856 [2024-12-09 15:01:21.917950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.917989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:43.856 [2024-12-09 15:01:21.917999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:43.856 [2024-12-09 15:01:21.918006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.918041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.918050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:43.856 [2024-12-09 15:01:21.918056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:43.856 [2024-12-09 15:01:21.918062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.918074] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:43.856 [2024-12-09 15:01:21.918576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:43.856 [2024-12-09 15:01:21.918593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.918599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:43.856 [2024-12-09 15:01:21.918605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:28:43.856 [2024-12-09 15:01:21.918611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.919552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:43.856 [2024-12-09 15:01:21.929032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.929063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:43.856 [2024-12-09 15:01:21.929072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.482 ms 00:28:43.856 [2024-12-09 15:01:21.929079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.929125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.929133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:43.856 [2024-12-09 15:01:21.929140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:43.856 [2024-12-09 15:01:21.929145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.933522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
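
The two "Currently unable to find bdev with name: nvc0n1" notices above are not fatal: bdev_open_ext is being called before nvc0n1 has finished registering, and the open is evidently retried, since startup proceeds normally right after. Scripts driving tests like this often wait for a bdev the same way; a hedged sketch using SPDK's rpc.py (the script path, flag, and polling interval here are assumptions, not taken from this log):

    import subprocess, time

    def wait_for_bdev(name, rpc="scripts/rpc.py", timeout=30.0, poll=0.5):
        """Poll until the named bdev is visible over RPC."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            # bdev_get_bdevs exits non-zero while the bdev is absent
            r = subprocess.run([rpc, "bdev_get_bdevs", "-b", name],
                               capture_output=True)
            if r.returncode == 0:
                return True
            time.sleep(poll)
        return False

    # e.g. wait_for_bdev("nvc0n1")
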
00:28:43.856 [2024-12-09 15:01:21.933548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:43.856 [2024-12-09 15:01:21.933556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:28:43.856 [2024-12-09 15:01:21.933565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.933618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.856 [2024-12-09 15:01:21.933624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:43.856 [2024-12-09 15:01:21.933631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:43.856 [2024-12-09 15:01:21.933636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.856 [2024-12-09 15:01:21.933668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.857 [2024-12-09 15:01:21.933676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:43.857 [2024-12-09 15:01:21.933682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:43.857 [2024-12-09 15:01:21.933687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.857 [2024-12-09 15:01:21.933703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:43.857 [2024-12-09 15:01:21.936406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.857 [2024-12-09 15:01:21.936431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:43.857 [2024-12-09 15:01:21.936440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:28:43.857 [2024-12-09 15:01:21.936445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.857 [2024-12-09 15:01:21.936474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.857 [2024-12-09 15:01:21.936481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:43.857 [2024-12-09 15:01:21.936487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:43.857 [2024-12-09 15:01:21.936492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.857 [2024-12-09 15:01:21.936505] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:43.857 [2024-12-09 15:01:21.936521] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:43.857 [2024-12-09 15:01:21.936548] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:43.857 [2024-12-09 15:01:21.936561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:43.857 [2024-12-09 15:01:21.936639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:43.857 [2024-12-09 15:01:21.936651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:43.857 [2024-12-09 15:01:21.936660] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:43.857 [2024-12-09 15:01:21.936668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:43.857 [2024-12-09 15:01:21.936674] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:43.857 [2024-12-09 15:01:21.936681] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:43.857 [2024-12-09 15:01:21.936686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:43.857 [2024-12-09 15:01:21.936694] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:43.857 [2024-12-09 15:01:21.936699] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:43.857 [2024-12-09 15:01:21.936705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.857 [2024-12-09 15:01:21.936710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:43.857 [2024-12-09 15:01:21.936716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:28:43.857 [2024-12-09 15:01:21.936721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.857 [2024-12-09 15:01:21.936784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.857 [2024-12-09 15:01:21.936790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:43.857 [2024-12-09 15:01:21.936796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:43.857 [2024-12-09 15:01:21.936811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.857 [2024-12-09 15:01:21.936888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:43.857 [2024-12-09 15:01:21.936895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:43.857 [2024-12-09 15:01:21.936902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:43.857 [2024-12-09 15:01:21.936908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.936913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:43.857 [2024-12-09 15:01:21.936919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.936924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:43.857 [2024-12-09 15:01:21.936929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:43.857 [2024-12-09 15:01:21.936934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:43.857 [2024-12-09 15:01:21.936939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:43.857 [2024-12-09 15:01:21.936944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:43.857 [2024-12-09 15:01:21.936949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:43.857 [2024-12-09 15:01:21.936954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:43.857 [2024-12-09 15:01:21.936964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:43.857 [2024-12-09 15:01:21.936969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:43.857 [2024-12-09 15:01:21.936975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.936980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:43.857 [2024-12-09 15:01:21.936985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:43.857 [2024-12-09 15:01:21.936990] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.936995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:43.857 [2024-12-09 15:01:21.937000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:43.857 [2024-12-09 15:01:21.937014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:43.857 [2024-12-09 15:01:21.937028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:43.857 [2024-12-09 15:01:21.937043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:43.857 [2024-12-09 15:01:21.937058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:43.857 [2024-12-09 15:01:21.937067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:43.857 [2024-12-09 15:01:21.937072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:43.857 [2024-12-09 15:01:21.937077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:43.857 [2024-12-09 15:01:21.937082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:43.857 [2024-12-09 15:01:21.937086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:43.857 [2024-12-09 15:01:21.937091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:43.857 [2024-12-09 15:01:21.937100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:43.857 [2024-12-09 15:01:21.937105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937110] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:43.857 [2024-12-09 15:01:21.937116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:43.857 [2024-12-09 15:01:21.937122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:43.857 [2024-12-09 15:01:21.937133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:43.857 [2024-12-09 15:01:21.937138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:43.857 [2024-12-09 15:01:21.937143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:43.857 
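
One reading note for the layout dumps above: the "blocks:" lines are printed in MiB rather than raw block counts (hence "blocks: 0.12 MiB"). With that in mind, the Region l2p size is exactly what the L2P parameters above predict, and the base-device data region just below lines up the same way (4 KiB block size assumed, as before):

    l2p_entries = 20971520        # "L2P entries" from the layout setup
    addr_size   = 4               # "L2P address size: 4" (bytes)
    print(l2p_entries * addr_size / 2**20)   # 80.0 -> "Region l2p ...
                                             # blocks: 80.00 MiB"

    # Base-dev data region: type:0x9 blk_sz:0x1900000 blocks * 4 KiB
    print(0x1900000 * 4096 / 2**20)          # 102400.0 -> "Region
                                             # data_btm ... 102400.00 MiB"
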
[2024-12-09 15:01:21.937148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:43.857 [2024-12-09 15:01:21.937153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:43.857 [2024-12-09 15:01:21.937157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:43.857 [2024-12-09 15:01:21.937164] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:43.857 [2024-12-09 15:01:21.937170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.857 [2024-12-09 15:01:21.937178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:43.857 [2024-12-09 15:01:21.937184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:43.857 [2024-12-09 15:01:21.937189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:43.857 [2024-12-09 15:01:21.937194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:43.857 [2024-12-09 15:01:21.937200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:43.857 [2024-12-09 15:01:21.937205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:43.857 [2024-12-09 15:01:21.937210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:43.857 [2024-12-09 15:01:21.937215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:43.857 [2024-12-09 15:01:21.937220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:43.857 [2024-12-09 15:01:21.937226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:43.857 [2024-12-09 15:01:21.937231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:43.857 [2024-12-09 15:01:21.937236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:43.857 [2024-12-09 15:01:21.937241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:43.857 [2024-12-09 15:01:21.937247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:43.857 [2024-12-09 15:01:21.937253] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:43.858 [2024-12-09 15:01:21.937259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.858 [2024-12-09 15:01:21.937265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:43.858 [2024-12-09 15:01:21.937270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:43.858 [2024-12-09 15:01:21.937275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:43.858 [2024-12-09 15:01:21.937280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:43.858 [2024-12-09 15:01:21.937285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.858 [2024-12-09 15:01:21.937291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:43.858 [2024-12-09 15:01:21.937297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:28:43.858 [2024-12-09 15:01:21.937303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.858 [2024-12-09 15:01:21.958172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.858 [2024-12-09 15:01:21.958202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:43.858 [2024-12-09 15:01:21.958210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.834 ms 00:28:43.858 [2024-12-09 15:01:21.958218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.858 [2024-12-09 15:01:21.958282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.858 [2024-12-09 15:01:21.958288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:43.858 [2024-12-09 15:01:21.958294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:43.858 [2024-12-09 15:01:21.958300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:21.999317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:21.999351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.117 [2024-12-09 15:01:21.999360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.978 ms 00:28:44.117 [2024-12-09 15:01:21.999367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:21.999397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:21.999404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.117 [2024-12-09 15:01:21.999414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:44.117 [2024-12-09 15:01:21.999420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:21.999736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:21.999758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.117 [2024-12-09 15:01:21.999765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:44.117 [2024-12-09 15:01:21.999771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:21.999877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:21.999886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.117 [2024-12-09 15:01:21.999892] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:44.117 [2024-12-09 15:01:21.999903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:22.010389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:22.010416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.117 [2024-12-09 15:01:22.010426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.471 ms 00:28:44.117 [2024-12-09 15:01:22.010432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:22.020358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:44.117 [2024-12-09 15:01:22.020387] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:44.117 [2024-12-09 15:01:22.020396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:22.020402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:44.117 [2024-12-09 15:01:22.020409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.884 ms 00:28:44.117 [2024-12-09 15:01:22.020414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:22.038916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.117 [2024-12-09 15:01:22.038948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:44.117 [2024-12-09 15:01:22.038958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.470 ms 00:28:44.117 [2024-12-09 15:01:22.038964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.117 [2024-12-09 15:01:22.047754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.047780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:44.118 [2024-12-09 15:01:22.047787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:28:44.118 [2024-12-09 15:01:22.047792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.056507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.056533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:44.118 [2024-12-09 15:01:22.056540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.683 ms 00:28:44.118 [2024-12-09 15:01:22.056546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.057012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.057033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:44.118 [2024-12-09 15:01:22.057042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:28:44.118 [2024-12-09 15:01:22.057048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.102767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.102813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:44.118 [2024-12-09 15:01:22.102827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
45.704 ms 00:28:44.118 [2024-12-09 15:01:22.102834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.110716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:44.118 [2024-12-09 15:01:22.112674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.112700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:44.118 [2024-12-09 15:01:22.112709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.801 ms 00:28:44.118 [2024-12-09 15:01:22.112716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.112772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.112781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:44.118 [2024-12-09 15:01:22.112790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:44.118 [2024-12-09 15:01:22.112797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.113962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.113989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:44.118 [2024-12-09 15:01:22.113996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:28:44.118 [2024-12-09 15:01:22.114002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.114021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.114028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:44.118 [2024-12-09 15:01:22.114035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:44.118 [2024-12-09 15:01:22.114040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.114079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:44.118 [2024-12-09 15:01:22.114087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.114093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:44.118 [2024-12-09 15:01:22.114099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:44.118 [2024-12-09 15:01:22.114105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.132099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.132127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:44.118 [2024-12-09 15:01:22.132138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:28:44.118 [2024-12-09 15:01:22.132144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 [2024-12-09 15:01:22.132197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.118 [2024-12-09 15:01:22.132204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:44.118 [2024-12-09 15:01:22.132210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:44.118 [2024-12-09 15:01:22.132216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.118 
[2024-12-09 15:01:22.133005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.661 ms, result 0 00:28:45.499
[2024-12-09T15:01:24.565Z] Copying: 1332/1048576 [kB] (1332 kBps)
[2024-12-09T15:01:25.504Z] Copying: 6104/1048576 [kB] (4772 kBps)
[2024-12-09T15:01:26.448Z] Copying: 34/1024 [MB] (28 MBps)
[2024-12-09T15:01:27.394Z] Copying: 62/1024 [MB] (27 MBps)
[2024-12-09T15:01:28.339Z] Copying: 87/1024 [MB] (25 MBps)
[2024-12-09T15:01:29.280Z] Copying: 113/1024 [MB] (26 MBps)
[2024-12-09T15:01:30.669Z] Copying: 156/1024 [MB] (42 MBps)
[2024-12-09T15:01:31.613Z] Copying: 185/1024 [MB] (28 MBps)
[2024-12-09T15:01:32.558Z] Copying: 215/1024 [MB] (30 MBps)
[2024-12-09T15:01:33.499Z] Copying: 244/1024 [MB] (28 MBps)
[2024-12-09T15:01:34.442Z] Copying: 282/1024 [MB] (37 MBps)
[2024-12-09T15:01:35.443Z] Copying: 309/1024 [MB] (27 MBps)
[2024-12-09T15:01:36.386Z] Copying: 333/1024 [MB] (23 MBps)
[2024-12-09T15:01:37.328Z] Copying: 363/1024 [MB] (30 MBps)
[2024-12-09T15:01:38.272Z] Copying: 394/1024 [MB] (30 MBps)
[2024-12-09T15:01:39.661Z] Copying: 423/1024 [MB] (29 MBps)
[2024-12-09T15:01:40.602Z] Copying: 447/1024 [MB] (23 MBps)
[2024-12-09T15:01:41.545Z] Copying: 476/1024 [MB] (29 MBps)
[2024-12-09T15:01:42.486Z] Copying: 506/1024 [MB] (29 MBps)
[2024-12-09T15:01:43.431Z] Copying: 533/1024 [MB] (27 MBps)
[2024-12-09T15:01:44.374Z] Copying: 564/1024 [MB] (30 MBps)
[2024-12-09T15:01:45.317Z] Copying: 593/1024 [MB] (29 MBps)
[2024-12-09T15:01:46.706Z] Copying: 615/1024 [MB] (21 MBps)
[2024-12-09T15:01:47.280Z] Copying: 645/1024 [MB] (29 MBps)
[2024-12-09T15:01:48.667Z] Copying: 664/1024 [MB] (19 MBps)
[2024-12-09T15:01:49.610Z] Copying: 698/1024 [MB] (34 MBps)
[2024-12-09T15:01:50.554Z] Copying: 728/1024 [MB] (30 MBps)
[2024-12-09T15:01:51.496Z] Copying: 754/1024 [MB] (26 MBps)
[2024-12-09T15:01:52.440Z] Copying: 787/1024 [MB] (32 MBps)
[2024-12-09T15:01:53.384Z] Copying: 815/1024 [MB] (28 MBps)
[2024-12-09T15:01:54.325Z] Copying: 846/1024 [MB] (31 MBps)
[2024-12-09T15:01:55.277Z] Copying: 874/1024 [MB] (27 MBps)
[2024-12-09T15:01:56.664Z] Copying: 902/1024 [MB] (28 MBps)
[2024-12-09T15:01:57.606Z] Copying: 934/1024 [MB] (31 MBps)
[2024-12-09T15:01:58.549Z] Copying: 964/1024 [MB] (30 MBps)
[2024-12-09T15:01:59.492Z] Copying: 994/1024 [MB] (29 MBps)
[2024-12-09T15:01:59.492Z] Copying: 1019/1024 [MB] (25 MBps)
[2024-12-09T15:02:00.066Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-12-09 15:01:59.828065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.828482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:21.944 [2024-12-09 15:01:59.828643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:21.944 [2024-12-09 15:01:59.828681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.828748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:21.944 [2024-12-09 15:01:59.833350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.833559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:21.944 [2024-12-09 15:01:59.833652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:29:21.944 [2024-12-09 15:01:59.833684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09
15:01:59.834049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.834181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:21.944 [2024-12-09 15:01:59.834265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:29:21.944 [2024-12-09 15:01:59.834300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.848259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.848455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:21.944 [2024-12-09 15:01:59.848475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.915 ms 00:29:21.944 [2024-12-09 15:01:59.848484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.854895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.854971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:21.944 [2024-12-09 15:01:59.854989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.354 ms 00:29:21.944 [2024-12-09 15:01:59.854997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.882583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.882638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:21.944 [2024-12-09 15:01:59.882652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.539 ms 00:29:21.944 [2024-12-09 15:01:59.882661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.899686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.899736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:21.944 [2024-12-09 15:01:59.899750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.973 ms 00:29:21.944 [2024-12-09 15:01:59.899759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.904449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.904504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:21.944 [2024-12-09 15:01:59.904516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:29:21.944 [2024-12-09 15:01:59.904535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.931610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.931662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:21.944 [2024-12-09 15:01:59.931676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.058 ms 00:29:21.944 [2024-12-09 15:01:59.931683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.957666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.957718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:21.944 [2024-12-09 15:01:59.957730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.933 ms 00:29:21.944 [2024-12-09 15:01:59.957738] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:01:59.983473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:01:59.983524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:21.944 [2024-12-09 15:01:59.983536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.686 ms 00:29:21.944 [2024-12-09 15:01:59.983543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:02:00.008850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.944 [2024-12-09 15:02:00.008904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:21.944 [2024-12-09 15:02:00.008917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.214 ms 00:29:21.944 [2024-12-09 15:02:00.008925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.944 [2024-12-09 15:02:00.008972] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:21.944 [2024-12-09 15:02:00.008990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:21.944 [2024-12-09 15:02:00.009003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:21.944 [2024-12-09 15:02:00.009012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009131] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:21.944 [2024-12-09 15:02:00.009215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 
15:02:00.009338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:29:21.945 [2024-12-09 15:02:00.009536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:21.945 [2024-12-09 15:02:00.009817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:21.945 [2024-12-09 15:02:00.009826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d14e84ea-2831-42e5-b340-abc80d689c33 00:29:21.945 [2024-12-09 15:02:00.009835] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:21.945 [2024-12-09 15:02:00.009843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 145600 00:29:21.945 [2024-12-09 15:02:00.009853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 143616 00:29:21.945 [2024-12-09 15:02:00.009863] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0138 00:29:21.945 [2024-12-09 15:02:00.009872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:21.945 [2024-12-09 15:02:00.009887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:21.945 [2024-12-09 15:02:00.009895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:21.945 [2024-12-09 15:02:00.009902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:21.945 [2024-12-09 15:02:00.009909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:21.945 [2024-12-09 15:02:00.009918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.945 [2024-12-09 15:02:00.009926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:21.945 [2024-12-09 15:02:00.009936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:29:21.945 [2024-12-09 15:02:00.009944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.945 [2024-12-09 15:02:00.023825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.945 [2024-12-09 15:02:00.023870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:21.945 [2024-12-09 15:02:00.023882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.860 ms 00:29:21.945 [2024-12-09 15:02:00.023890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.945 [2024-12-09 15:02:00.024298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.945 [2024-12-09 15:02:00.024318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:29:21.945 [2024-12-09 15:02:00.024328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:29:21.945 [2024-12-09 15:02:00.024336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.945 [2024-12-09 15:02:00.061130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.946 [2024-12-09 15:02:00.061186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.946 [2024-12-09 15:02:00.061198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.946 [2024-12-09 15:02:00.061206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.946 [2024-12-09 15:02:00.061263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.946 [2024-12-09 15:02:00.061272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.946 [2024-12-09 15:02:00.061281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.946 [2024-12-09 15:02:00.061289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.946 [2024-12-09 15:02:00.061389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.946 [2024-12-09 15:02:00.061400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:21.946 [2024-12-09 15:02:00.061409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.946 [2024-12-09 15:02:00.061416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.946 [2024-12-09 15:02:00.061433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.946 [2024-12-09 15:02:00.061442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.946 [2024-12-09 15:02:00.061450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.946 [2024-12-09 15:02:00.061457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.145924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.145990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:22.206 [2024-12-09 15:02:00.146004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.146014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.215519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:22.206 [2024-12-09 15:02:00.215532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.215541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.215617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:22.206 [2024-12-09 15:02:00.215626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.215635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 
[2024-12-09 15:02:00.215712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:22.206 [2024-12-09 15:02:00.215721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.215729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.215863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:22.206 [2024-12-09 15:02:00.215876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.215884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.215927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:22.206 [2024-12-09 15:02:00.215936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.215944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.215985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.215995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:22.206 [2024-12-09 15:02:00.216007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.216016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206 [2024-12-09 15:02:00.216065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.206 [2024-12-09 15:02:00.216076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:22.206 [2024-12-09 15:02:00.216085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.206 [2024-12-09 15:02:00.216093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.206
[2024-12-09 15:02:00.216230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.143 ms, result 0 00:29:23.148
00:29:23.148
00:29:23.148
15:02:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:25.054
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:25.054
15:02:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-09 15:02:03.001889] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization...
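The two dirty_shutdown.sh steps above are the data-integrity check for this test: step @94 runs md5sum -c against the recorded digest of testfile (reported OK), and step @95 uses spdk_dd to dump a second 262144-block region of ftl0 (--skip=262144 mirrors the --count=262144 of the first copy, which matches the 1024 MiB transfer shown above at 4 KiB per block), presumably so it can be verified the same way. A minimal sketch of the comparison that md5sum -c performs, not the SPDK test code itself; the file name is illustrative and the digest file is assumed to use md5sum's standard "<digest>  <path>" layout:

    import hashlib

    def md5_hex(path: str) -> str:
        # Stream the file in 1 MiB chunks so a ~1 GiB testfile
        # never has to fit in memory at once.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # "testfile.md5" is a hypothetical stand-in for the digest file
    # referenced in the log; md5sum writes "<digest>  <path>" lines.
    digest, name = open("testfile.md5").read().split(maxsplit=1)
    name = name.strip()
    print(f"{name}: {'OK' if md5_hex(name) == digest else 'FAILED'}")

Incidentally, the WAF figure in the statistics dump above is just the ratio of total media writes to user writes: 145600 / 143616 ≈ 1.0138, i.e. roughly 1.4% of the writes in this run were FTL housekeeping rather than user data.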
00:29:25.054 [2024-12-09 15:02:03.002158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83267 ] 00:29:25.054 [2024-12-09 15:02:03.162180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.315 [2024-12-09 15:02:03.289632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.577 [2024-12-09 15:02:03.587678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:25.577 [2024-12-09 15:02:03.587772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:25.840 [2024-12-09 15:02:03.749697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.749769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:25.840 [2024-12-09 15:02:03.749785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:25.840 [2024-12-09 15:02:03.749794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.749877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.749891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:25.840 [2024-12-09 15:02:03.749901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:25.840 [2024-12-09 15:02:03.749909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.749951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:25.840 [2024-12-09 15:02:03.751179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:25.840 [2024-12-09 15:02:03.751238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.751249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:25.840 [2024-12-09 15:02:03.751260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:29:25.840 [2024-12-09 15:02:03.751269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.753084] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:25.840 [2024-12-09 15:02:03.767698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.767753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:25.840 [2024-12-09 15:02:03.767767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.616 ms 00:29:25.840 [2024-12-09 15:02:03.767775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.767888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.767901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:25.840 [2024-12-09 15:02:03.767911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:25.840 [2024-12-09 15:02:03.767919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.776365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:25.840 [2024-12-09 15:02:03.776413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:25.840 [2024-12-09 15:02:03.776423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.356 ms 00:29:25.840 [2024-12-09 15:02:03.776437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.776518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.776527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:25.840 [2024-12-09 15:02:03.776536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:25.840 [2024-12-09 15:02:03.776544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.776590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.840 [2024-12-09 15:02:03.776600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:25.840 [2024-12-09 15:02:03.776609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:25.840 [2024-12-09 15:02:03.776617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.840 [2024-12-09 15:02:03.776644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:25.840 [2024-12-09 15:02:03.780813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.841 [2024-12-09 15:02:03.780853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:25.841 [2024-12-09 15:02:03.780866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:29:25.841 [2024-12-09 15:02:03.780875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.841 [2024-12-09 15:02:03.780917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.841 [2024-12-09 15:02:03.780927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:25.841 [2024-12-09 15:02:03.780935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:25.841 [2024-12-09 15:02:03.780943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.841 [2024-12-09 15:02:03.780999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:25.841 [2024-12-09 15:02:03.781025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:25.841 [2024-12-09 15:02:03.781062] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:25.841 [2024-12-09 15:02:03.781082] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:25.841 [2024-12-09 15:02:03.781189] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:25.841 [2024-12-09 15:02:03.781200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:25.841 [2024-12-09 15:02:03.781211] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:25.841 [2024-12-09 15:02:03.781222] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781231] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:25.841 [2024-12-09 15:02:03.781248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:25.841 [2024-12-09 15:02:03.781259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:25.841 [2024-12-09 15:02:03.781266] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:25.841 [2024-12-09 15:02:03.781274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.841 [2024-12-09 15:02:03.781281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:25.841 [2024-12-09 15:02:03.781289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:29:25.841 [2024-12-09 15:02:03.781297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.841 [2024-12-09 15:02:03.781379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.841 [2024-12-09 15:02:03.781388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:25.841 [2024-12-09 15:02:03.781396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:25.841 [2024-12-09 15:02:03.781403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.841 [2024-12-09 15:02:03.781511] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:25.841 [2024-12-09 15:02:03.781522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:25.841 [2024-12-09 15:02:03.781530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:25.841 [2024-12-09 15:02:03.781554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:25.841 [2024-12-09 15:02:03.781576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.841 [2024-12-09 15:02:03.781592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:25.841 [2024-12-09 15:02:03.781600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:25.841 [2024-12-09 15:02:03.781607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.841 [2024-12-09 15:02:03.781621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:25.841 [2024-12-09 15:02:03.781629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:25.841 [2024-12-09 15:02:03.781635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:25.841 [2024-12-09 15:02:03.781649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781657] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:25.841 [2024-12-09 15:02:03.781671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:25.841 [2024-12-09 15:02:03.781692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:25.841 [2024-12-09 15:02:03.781712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:25.841 [2024-12-09 15:02:03.781732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:25.841 [2024-12-09 15:02:03.781753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.841 [2024-12-09 15:02:03.781767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:25.841 [2024-12-09 15:02:03.781774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:25.841 [2024-12-09 15:02:03.781780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.841 [2024-12-09 15:02:03.781787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:25.841 [2024-12-09 15:02:03.781793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:25.841 [2024-12-09 15:02:03.781821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:25.841 [2024-12-09 15:02:03.781836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:25.841 [2024-12-09 15:02:03.781845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781852] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:25.841 [2024-12-09 15:02:03.781861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:25.841 [2024-12-09 15:02:03.781869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.841 [2024-12-09 15:02:03.781886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:25.841 [2024-12-09 15:02:03.781894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:25.841 [2024-12-09 15:02:03.781902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:25.841 
[2024-12-09 15:02:03.781910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:25.841 [2024-12-09 15:02:03.781917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:25.841 [2024-12-09 15:02:03.781924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:25.841 [2024-12-09 15:02:03.781934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:25.841 [2024-12-09 15:02:03.781943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.781955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:25.841 [2024-12-09 15:02:03.781962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:25.841 [2024-12-09 15:02:03.781969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:25.841 [2024-12-09 15:02:03.781977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:25.841 [2024-12-09 15:02:03.781984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:25.841 [2024-12-09 15:02:03.781991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:25.841 [2024-12-09 15:02:03.781997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:25.841 [2024-12-09 15:02:03.782004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:25.841 [2024-12-09 15:02:03.782011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:25.841 [2024-12-09 15:02:03.782018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:25.841 [2024-12-09 15:02:03.782053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:25.841 [2024-12-09 15:02:03.782061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:25.841 [2024-12-09 15:02:03.782076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:25.842 [2024-12-09 15:02:03.782083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:25.842 [2024-12-09 15:02:03.782092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:25.842 [2024-12-09 15:02:03.782102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.782110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:25.842 [2024-12-09 15:02:03.782118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:29:25.842 [2024-12-09 15:02:03.782125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.814648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.814705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:25.842 [2024-12-09 15:02:03.814716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.476 ms 00:29:25.842 [2024-12-09 15:02:03.814728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.814843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.814853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:25.842 [2024-12-09 15:02:03.814862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:25.842 [2024-12-09 15:02:03.814869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.861699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.861757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:25.842 [2024-12-09 15:02:03.861770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.765 ms 00:29:25.842 [2024-12-09 15:02:03.861779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.861846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.861857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:25.842 [2024-12-09 15:02:03.861871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:25.842 [2024-12-09 15:02:03.861880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.862506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.862548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:25.842 [2024-12-09 15:02:03.862560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:29:25.842 [2024-12-09 15:02:03.862568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.862729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.862740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:25.842 [2024-12-09 15:02:03.862754] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:29:25.842 [2024-12-09 15:02:03.862762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.879069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.879117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:25.842 [2024-12-09 15:02:03.879130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.286 ms 00:29:25.842 [2024-12-09 15:02:03.879138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.893617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:25.842 [2024-12-09 15:02:03.893674] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:25.842 [2024-12-09 15:02:03.893688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.893696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:25.842 [2024-12-09 15:02:03.893707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.434 ms 00:29:25.842 [2024-12-09 15:02:03.893714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.919906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.919958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:25.842 [2024-12-09 15:02:03.919971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.132 ms 00:29:25.842 [2024-12-09 15:02:03.919979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.933418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.933470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:25.842 [2024-12-09 15:02:03.933483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.362 ms 00:29:25.842 [2024-12-09 15:02:03.933491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.946612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.946661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:25.842 [2024-12-09 15:02:03.946674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.069 ms 00:29:25.842 [2024-12-09 15:02:03.946681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.842 [2024-12-09 15:02:03.947398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.842 [2024-12-09 15:02:03.947428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:25.842 [2024-12-09 15:02:03.947441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:29:25.842 [2024-12-09 15:02:03.947448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.014784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.014868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:26.103 [2024-12-09 15:02:04.014891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.314 ms 00:29:26.103 [2024-12-09 15:02:04.014901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.026354] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:26.103 [2024-12-09 15:02:04.029936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.029986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:26.103 [2024-12-09 15:02:04.030001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.960 ms 00:29:26.103 [2024-12-09 15:02:04.030010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.030103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.030114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:26.103 [2024-12-09 15:02:04.030127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:26.103 [2024-12-09 15:02:04.030136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.031061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.031107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:26.103 [2024-12-09 15:02:04.031119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:29:26.103 [2024-12-09 15:02:04.031129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.031160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.031170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:26.103 [2024-12-09 15:02:04.031180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:26.103 [2024-12-09 15:02:04.031190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.031238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:26.103 [2024-12-09 15:02:04.031251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.031261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:26.103 [2024-12-09 15:02:04.031271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:26.103 [2024-12-09 15:02:04.031280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.057937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.103 [2024-12-09 15:02:04.057990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:26.103 [2024-12-09 15:02:04.058010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.635 ms 00:29:26.103 [2024-12-09 15:02:04.058018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.103 [2024-12-09 15:02:04.058107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.104 [2024-12-09 15:02:04.058119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:26.104 [2024-12-09 15:02:04.058129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:26.104 [2024-12-09 15:02:04.058138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
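The superblock layout dumps earlier in this startup trace report each metadata region as hex blk_offs/blk_sz pairs, while dump_region prints offsets and sizes in MiB. The two agree if you assume the 4 KiB block size SPDK's FTL uses (FTL_BLOCK_SIZE); a minimal shell sketch of that cross-check, using the base-dev data region (type:0x9) from the dump above:

```bash
# Minimal sketch, assuming 4 KiB FTL blocks (FTL_BLOCK_SIZE in SPDK).
blk_offs=$((0x40)); blk_sz=$((0x1900000))              # Region type:0x9 above
echo "offset: $(( blk_offs * 4096 )) B"                 # 262144 B = 0.25 MiB
echo "size:   $(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # 102400 MiB
```

Both values match the "offset: 0.25 MiB" and "blocks: 102400.00 MiB" figures printed by dump_region in ftl_layout.c at the top of this startup sequence.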
00:29:26.104 [2024-12-09 15:02:04.059612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.421 ms, result 0 00:29:27.493  [2024-12-09T15:02:06.557Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-09T15:02:07.499Z] Copying: 35/1024 [MB] (19 MBps) [2024-12-09T15:02:08.448Z] Copying: 53/1024 [MB] (17 MBps) [2024-12-09T15:02:09.411Z] Copying: 70/1024 [MB] (17 MBps) [2024-12-09T15:02:10.390Z] Copying: 92/1024 [MB] (21 MBps) [2024-12-09T15:02:11.333Z] Copying: 112/1024 [MB] (20 MBps) [2024-12-09T15:02:12.277Z] Copying: 129/1024 [MB] (16 MBps) [2024-12-09T15:02:13.664Z] Copying: 149/1024 [MB] (19 MBps) [2024-12-09T15:02:14.610Z] Copying: 162/1024 [MB] (12 MBps) [2024-12-09T15:02:15.555Z] Copying: 172/1024 [MB] (10 MBps) [2024-12-09T15:02:16.501Z] Copying: 185/1024 [MB] (12 MBps) [2024-12-09T15:02:17.446Z] Copying: 197/1024 [MB] (11 MBps) [2024-12-09T15:02:18.389Z] Copying: 210/1024 [MB] (12 MBps) [2024-12-09T15:02:19.334Z] Copying: 232/1024 [MB] (21 MBps) [2024-12-09T15:02:20.279Z] Copying: 251/1024 [MB] (19 MBps) [2024-12-09T15:02:21.666Z] Copying: 271/1024 [MB] (20 MBps) [2024-12-09T15:02:22.239Z] Copying: 290/1024 [MB] (18 MBps) [2024-12-09T15:02:23.627Z] Copying: 310/1024 [MB] (20 MBps) [2024-12-09T15:02:24.571Z] Copying: 327/1024 [MB] (16 MBps) [2024-12-09T15:02:25.515Z] Copying: 339/1024 [MB] (12 MBps) [2024-12-09T15:02:26.457Z] Copying: 355/1024 [MB] (15 MBps) [2024-12-09T15:02:27.403Z] Copying: 370/1024 [MB] (15 MBps) [2024-12-09T15:02:28.349Z] Copying: 391/1024 [MB] (20 MBps) [2024-12-09T15:02:29.294Z] Copying: 408/1024 [MB] (16 MBps) [2024-12-09T15:02:30.240Z] Copying: 418/1024 [MB] (10 MBps) [2024-12-09T15:02:31.630Z] Copying: 429/1024 [MB] (10 MBps) [2024-12-09T15:02:32.574Z] Copying: 439/1024 [MB] (10 MBps) [2024-12-09T15:02:33.518Z] Copying: 449/1024 [MB] (10 MBps) [2024-12-09T15:02:34.465Z] Copying: 468/1024 [MB] (18 MBps) [2024-12-09T15:02:35.410Z] Copying: 479/1024 [MB] (10 MBps) [2024-12-09T15:02:36.356Z] Copying: 489/1024 [MB] (10 MBps) [2024-12-09T15:02:37.299Z] Copying: 499/1024 [MB] (10 MBps) [2024-12-09T15:02:38.241Z] Copying: 510/1024 [MB] (10 MBps) [2024-12-09T15:02:39.627Z] Copying: 524/1024 [MB] (13 MBps) [2024-12-09T15:02:40.570Z] Copying: 534/1024 [MB] (10 MBps) [2024-12-09T15:02:41.514Z] Copying: 549/1024 [MB] (14 MBps) [2024-12-09T15:02:42.457Z] Copying: 567/1024 [MB] (17 MBps) [2024-12-09T15:02:43.401Z] Copying: 584/1024 [MB] (16 MBps) [2024-12-09T15:02:44.450Z] Copying: 601/1024 [MB] (17 MBps) [2024-12-09T15:02:45.393Z] Copying: 620/1024 [MB] (18 MBps) [2024-12-09T15:02:46.337Z] Copying: 636/1024 [MB] (16 MBps) [2024-12-09T15:02:47.283Z] Copying: 651/1024 [MB] (15 MBps) [2024-12-09T15:02:48.673Z] Copying: 669/1024 [MB] (17 MBps) [2024-12-09T15:02:49.247Z] Copying: 689/1024 [MB] (19 MBps) [2024-12-09T15:02:50.636Z] Copying: 699/1024 [MB] (10 MBps) [2024-12-09T15:02:51.580Z] Copying: 709/1024 [MB] (10 MBps) [2024-12-09T15:02:52.522Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-09T15:02:53.464Z] Copying: 732/1024 [MB] (11 MBps) [2024-12-09T15:02:54.408Z] Copying: 744/1024 [MB] (12 MBps) [2024-12-09T15:02:55.352Z] Copying: 761/1024 [MB] (16 MBps) [2024-12-09T15:02:56.297Z] Copying: 776/1024 [MB] (14 MBps) [2024-12-09T15:02:57.242Z] Copying: 791/1024 [MB] (15 MBps) [2024-12-09T15:02:58.627Z] Copying: 804/1024 [MB] (12 MBps) [2024-12-09T15:02:59.572Z] Copying: 828/1024 [MB] (24 MBps) [2024-12-09T15:03:00.518Z] Copying: 841/1024 [MB] (13 MBps) [2024-12-09T15:03:01.459Z] Copying: 856/1024 [MB] (14 MBps) 
[2024-12-09T15:03:02.403Z] Copying: 866/1024 [MB] (10 MBps) [2024-12-09T15:03:03.348Z] Copying: 878/1024 [MB] (11 MBps) [2024-12-09T15:03:04.290Z] Copying: 889/1024 [MB] (11 MBps) [2024-12-09T15:03:05.672Z] Copying: 906/1024 [MB] (16 MBps) [2024-12-09T15:03:06.242Z] Copying: 916/1024 [MB] (10 MBps) [2024-12-09T15:03:07.629Z] Copying: 927/1024 [MB] (10 MBps) [2024-12-09T15:03:08.570Z] Copying: 938/1024 [MB] (10 MBps) [2024-12-09T15:03:09.510Z] Copying: 949/1024 [MB] (11 MBps) [2024-12-09T15:03:10.454Z] Copying: 973/1024 [MB] (23 MBps) [2024-12-09T15:03:11.399Z] Copying: 986/1024 [MB] (12 MBps) [2024-12-09T15:03:12.342Z] Copying: 1002/1024 [MB] (16 MBps) [2024-12-09T15:03:12.342Z] Copying: 1023/1024 [MB] (20 MBps) [2024-12-09T15:03:12.342Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-09 15:03:12.328103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.220 [2024-12-09 15:03:12.328475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:34.220 [2024-12-09 15:03:12.328597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:34.220 [2024-12-09 15:03:12.328630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.220 [2024-12-09 15:03:12.328685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:34.220 [2024-12-09 15:03:12.332611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.220 [2024-12-09 15:03:12.332819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:34.220 [2024-12-09 15:03:12.334499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:30:34.220 [2024-12-09 15:03:12.334674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.220 [2024-12-09 15:03:12.335062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.220 [2024-12-09 15:03:12.335206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:34.220 [2024-12-09 15:03:12.335286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:30:34.220 [2024-12-09 15:03:12.335314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.220 [2024-12-09 15:03:12.339457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.220 [2024-12-09 15:03:12.339572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:34.220 [2024-12-09 15:03:12.339633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:30:34.220 [2024-12-09 15:03:12.339665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.345892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.346054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:34.482 [2024-12-09 15:03:12.346122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:30:34.482 [2024-12-09 15:03:12.346147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.373833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.374020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:34.482 [2024-12-09 15:03:12.374084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.569 ms 00:30:34.482 [2024-12-09 
15:03:12.374106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.391268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.391477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:34.482 [2024-12-09 15:03:12.391555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.849 ms 00:30:34.482 [2024-12-09 15:03:12.391581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.396580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.396740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:34.482 [2024-12-09 15:03:12.396815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:30:34.482 [2024-12-09 15:03:12.396841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.423934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.424109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:34.482 [2024-12-09 15:03:12.424166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.059 ms 00:30:34.482 [2024-12-09 15:03:12.424188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.482 [2024-12-09 15:03:12.450695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.482 [2024-12-09 15:03:12.450894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:34.482 [2024-12-09 15:03:12.450988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.356 ms 00:30:34.483 [2024-12-09 15:03:12.451013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.483 [2024-12-09 15:03:12.476572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.483 [2024-12-09 15:03:12.476742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:34.483 [2024-12-09 15:03:12.476764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.507 ms 00:30:34.483 [2024-12-09 15:03:12.476772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.483 [2024-12-09 15:03:12.502715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.483 [2024-12-09 15:03:12.502903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:34.483 [2024-12-09 15:03:12.502982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.831 ms 00:30:34.483 [2024-12-09 15:03:12.503007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.483 [2024-12-09 15:03:12.503059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:34.483 [2024-12-09 15:03:12.503098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:34.483 [2024-12-09 15:03:12.503135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:34.483 [2024-12-09 15:03:12.503164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.503918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 
15:03:12.504547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:30:34.483 [2024-12-09 15:03:12.504741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:34.483 [2024-12-09 15:03:12.504969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.504978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.504986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.504994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:34.484 [2024-12-09 15:03:12.505137] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:34.484 [2024-12-09 15:03:12.505145] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d14e84ea-2831-42e5-b340-abc80d689c33 00:30:34.484 [2024-12-09 15:03:12.505154] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:34.484 [2024-12-09 15:03:12.505162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:34.484 [2024-12-09 
15:03:12.505169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:34.484 [2024-12-09 15:03:12.505178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:34.484 [2024-12-09 15:03:12.505197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:34.484 [2024-12-09 15:03:12.505206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:34.484 [2024-12-09 15:03:12.505213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:34.484 [2024-12-09 15:03:12.505220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:34.484 [2024-12-09 15:03:12.505227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:34.484 [2024-12-09 15:03:12.505237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.484 [2024-12-09 15:03:12.505246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:34.484 [2024-12-09 15:03:12.505257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:30:34.484 [2024-12-09 15:03:12.505269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.518851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.484 [2024-12-09 15:03:12.518901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:34.484 [2024-12-09 15:03:12.518913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.529 ms 00:30:34.484 [2024-12-09 15:03:12.518923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.519345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.484 [2024-12-09 15:03:12.519372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:34.484 [2024-12-09 15:03:12.519383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:30:34.484 [2024-12-09 15:03:12.519391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.556183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.484 [2024-12-09 15:03:12.556232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:34.484 [2024-12-09 15:03:12.556245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.484 [2024-12-09 15:03:12.556254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.556328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.484 [2024-12-09 15:03:12.556344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:34.484 [2024-12-09 15:03:12.556354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.484 [2024-12-09 15:03:12.556362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.556457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.484 [2024-12-09 15:03:12.556470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:34.484 [2024-12-09 15:03:12.556480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.484 [2024-12-09 15:03:12.556489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.484 [2024-12-09 15:03:12.556507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:30:34.484 [2024-12-09 15:03:12.556516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:34.484 [2024-12-09 15:03:12.556528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.484 [2024-12-09 15:03:12.556538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.642648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.642708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:34.746 [2024-12-09 15:03:12.642722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.642731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:34.746 [2024-12-09 15:03:12.712294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:34.746 [2024-12-09 15:03:12.712380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:34.746 [2024-12-09 15:03:12.712466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:34.746 [2024-12-09 15:03:12.712599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:34.746 [2024-12-09 15:03:12.712660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:34.746 [2024-12-09 15:03:12.712736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 
[2024-12-09 15:03:12.712793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.746 [2024-12-09 15:03:12.712832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:34.746 [2024-12-09 15:03:12.712842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.746 [2024-12-09 15:03:12.712854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.746 [2024-12-09 15:03:12.712991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.856 ms, result 0 00:30:35.688 00:30:35.688 00:30:35.688 15:03:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:37.612 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:37.612 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:37.612 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:37.612 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:37.612 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81549 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81549 ']' 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81549 00:30:37.871 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81549) - No such process 00:30:37.871 Process with pid 81549 is not found 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81549 is not found' 00:30:37.871 15:03:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:38.442 Remove shared memory files 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:38.442 00:30:38.442 real 3m55.204s 00:30:38.442 user 4m21.778s 00:30:38.442 sys 0m26.884s 00:30:38.442 ************************************ 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.442 15:03:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:38.442 END TEST ftl_dirty_shutdown 00:30:38.442 ************************************ 00:30:38.442 15:03:16 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:38.442 15:03:16 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.442 15:03:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.442 15:03:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:38.442 ************************************ 00:30:38.442 START TEST ftl_upgrade_shutdown 00:30:38.442 ************************************ 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:38.442 * Looking for test storage... 00:30:38.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.442 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.443 --rc genhtml_branch_coverage=1 00:30:38.443 --rc genhtml_function_coverage=1 00:30:38.443 --rc genhtml_legend=1 00:30:38.443 --rc geninfo_all_blocks=1 00:30:38.443 --rc geninfo_unexecuted_blocks=1 00:30:38.443 00:30:38.443 ' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.443 --rc genhtml_branch_coverage=1 00:30:38.443 --rc genhtml_function_coverage=1 00:30:38.443 --rc genhtml_legend=1 00:30:38.443 --rc geninfo_all_blocks=1 00:30:38.443 --rc geninfo_unexecuted_blocks=1 00:30:38.443 00:30:38.443 ' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.443 --rc genhtml_branch_coverage=1 00:30:38.443 --rc genhtml_function_coverage=1 00:30:38.443 --rc genhtml_legend=1 00:30:38.443 --rc geninfo_all_blocks=1 00:30:38.443 --rc geninfo_unexecuted_blocks=1 00:30:38.443 00:30:38.443 ' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.443 --rc genhtml_branch_coverage=1 00:30:38.443 --rc genhtml_function_coverage=1 00:30:38.443 --rc genhtml_legend=1 00:30:38.443 --rc geninfo_all_blocks=1 00:30:38.443 --rc geninfo_unexecuted_blocks=1 00:30:38.443 00:30:38.443 ' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:38.443 15:03:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84070 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84070 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84070 ']' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.443 15:03:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:38.705 [2024-12-09 15:03:16.592582] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
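The waitforlisten trace above launches spdk_tgt pinned to core 0 and then blocks until the target answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-poll pattern (the rpc_get_methods probe and the polling loop are illustrative assumptions, not a verbatim copy of the autotest_common.sh helper):

```bash
# Hedged sketch of launch-then-wait, not the verbatim waitforlisten helper.
rootdir=/home/vagrant/spdk_repo/spdk             # as in the traces above
"$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &  # same invocation as ftl/common.sh@87
spdk_tgt_pid=$!
# Poll the default RPC socket until it responds; rpc_get_methods is a
# cheap probe that every SPDK target serves once initialization finishes.
until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1  # target died during startup
    sleep 0.5
done
```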
00:30:38.705 [2024-12-09 15:03:16.592733] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84070 ] 00:30:38.705 [2024-12-09 15:03:16.754078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.965 [2024-12-09 15:03:16.876043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:39.536 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:39.797 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:40.086 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:40.086 { 00:30:40.086 "name": "basen1", 00:30:40.086 "aliases": [ 00:30:40.086 "4850451d-07b4-446a-8d45-ef847bb9ca09" 00:30:40.086 ], 00:30:40.086 "product_name": "NVMe disk", 00:30:40.086 "block_size": 4096, 00:30:40.086 "num_blocks": 1310720, 00:30:40.086 "uuid": "4850451d-07b4-446a-8d45-ef847bb9ca09", 00:30:40.086 "numa_id": -1, 00:30:40.086 "assigned_rate_limits": { 00:30:40.086 "rw_ios_per_sec": 0, 00:30:40.086 "rw_mbytes_per_sec": 0, 00:30:40.086 "r_mbytes_per_sec": 0, 00:30:40.086 "w_mbytes_per_sec": 0 00:30:40.086 }, 00:30:40.086 "claimed": true, 00:30:40.086 "claim_type": "read_many_write_one", 00:30:40.086 "zoned": false, 00:30:40.086 "supported_io_types": { 00:30:40.086 "read": true, 00:30:40.086 "write": true, 00:30:40.086 "unmap": true, 00:30:40.086 "flush": true, 00:30:40.086 "reset": true, 00:30:40.086 "nvme_admin": true, 00:30:40.086 "nvme_io": true, 00:30:40.086 "nvme_io_md": false, 00:30:40.086 "write_zeroes": true, 00:30:40.086 "zcopy": false, 00:30:40.086 "get_zone_info": false, 00:30:40.086 "zone_management": false, 00:30:40.086 "zone_append": false, 00:30:40.086 "compare": true, 00:30:40.086 "compare_and_write": false, 00:30:40.086 "abort": true, 00:30:40.086 "seek_hole": false, 00:30:40.086 "seek_data": false, 00:30:40.086 "copy": true, 00:30:40.086 "nvme_iov_md": false 00:30:40.086 }, 00:30:40.086 "driver_specific": { 00:30:40.086 "nvme": [ 00:30:40.086 { 00:30:40.086 "pci_address": "0000:00:11.0", 00:30:40.086 "trid": { 00:30:40.086 "trtype": "PCIe", 00:30:40.086 "traddr": "0000:00:11.0" 00:30:40.086 }, 00:30:40.086 "ctrlr_data": { 00:30:40.086 "cntlid": 0, 00:30:40.086 "vendor_id": "0x1b36", 00:30:40.086 "model_number": "QEMU NVMe Ctrl", 00:30:40.086 "serial_number": "12341", 00:30:40.086 "firmware_revision": "8.0.0", 00:30:40.086 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:40.086 "oacs": { 00:30:40.086 "security": 0, 00:30:40.086 "format": 1, 00:30:40.086 "firmware": 0, 00:30:40.086 "ns_manage": 1 00:30:40.086 }, 00:30:40.086 "multi_ctrlr": false, 00:30:40.086 "ana_reporting": false 00:30:40.086 }, 00:30:40.086 "vs": { 00:30:40.086 "nvme_version": "1.4" 00:30:40.086 }, 00:30:40.086 "ns_data": { 00:30:40.086 "id": 1, 00:30:40.086 "can_share": false 00:30:40.086 } 00:30:40.086 } 00:30:40.086 ], 00:30:40.086 "mp_policy": "active_passive" 00:30:40.086 } 00:30:40.086 } 00:30:40.086 ]' 00:30:40.086 15:03:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=18e2036b-c644-48b5-a813-d1bd0f43e2b9 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:40.086 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18e2036b-c644-48b5-a813-d1bd0f43e2b9 00:30:40.385 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:40.644 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=66ba270f-d813-4acc-b2e4-b3d05888c23f 00:30:40.644 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 66ba270f-d813-4acc-b2e4-b3d05888c23f 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=e49ab8da-b8a5-487c-a039-76f43ad4a846 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z e49ab8da-b8a5-487c-a039-76f43ad4a846 ]] 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 e49ab8da-b8a5-487c-a039-76f43ad4a846 5120 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=e49ab8da-b8a5-487c-a039-76f43ad4a846 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size e49ab8da-b8a5-487c-a039-76f43ad4a846 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=e49ab8da-b8a5-487c-a039-76f43ad4a846 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:40.905 15:03:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e49ab8da-b8a5-487c-a039-76f43ad4a846 00:30:40.905 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:40.905 { 00:30:40.905 "name": "e49ab8da-b8a5-487c-a039-76f43ad4a846", 00:30:40.905 "aliases": [ 00:30:40.905 "lvs/basen1p0" 00:30:40.905 ], 00:30:40.905 "product_name": "Logical Volume", 00:30:40.905 "block_size": 4096, 00:30:40.905 "num_blocks": 5242880, 00:30:40.905 "uuid": "e49ab8da-b8a5-487c-a039-76f43ad4a846", 00:30:40.905 "assigned_rate_limits": { 00:30:40.905 "rw_ios_per_sec": 0, 00:30:40.905 "rw_mbytes_per_sec": 0, 00:30:40.905 "r_mbytes_per_sec": 0, 00:30:40.905 "w_mbytes_per_sec": 0 00:30:40.905 }, 00:30:40.905 "claimed": false, 00:30:40.905 "zoned": false, 00:30:40.905 "supported_io_types": { 00:30:40.905 "read": true, 00:30:40.905 "write": true, 00:30:40.905 "unmap": true, 00:30:40.905 "flush": false, 00:30:40.905 "reset": true, 00:30:40.905 "nvme_admin": false, 00:30:40.905 "nvme_io": false, 00:30:40.905 "nvme_io_md": false, 00:30:40.905 "write_zeroes": 
true, 00:30:40.905 "zcopy": false, 00:30:40.905 "get_zone_info": false, 00:30:40.905 "zone_management": false, 00:30:40.905 "zone_append": false, 00:30:40.905 "compare": false, 00:30:40.905 "compare_and_write": false, 00:30:40.905 "abort": false, 00:30:40.905 "seek_hole": true, 00:30:40.905 "seek_data": true, 00:30:40.905 "copy": false, 00:30:40.905 "nvme_iov_md": false 00:30:40.905 }, 00:30:40.905 "driver_specific": { 00:30:40.905 "lvol": { 00:30:40.905 "lvol_store_uuid": "66ba270f-d813-4acc-b2e4-b3d05888c23f", 00:30:40.905 "base_bdev": "basen1", 00:30:40.905 "thin_provision": true, 00:30:40.905 "num_allocated_clusters": 0, 00:30:40.905 "snapshot": false, 00:30:40.905 "clone": false, 00:30:40.905 "esnap_clone": false 00:30:40.905 } 00:30:40.905 } 00:30:40.905 } 00:30:40.905 ]' 00:30:40.905 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:41.167 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:41.427 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:41.428 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:41.428 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:41.688 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:41.688 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:41.688 15:03:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e49ab8da-b8a5-487c-a039-76f43ad4a846 -c cachen1p0 --l2p_dram_limit 2 00:30:41.688 [2024-12-09 15:03:19.782456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.688 [2024-12-09 15:03:19.782528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:41.688 [2024-12-09 15:03:19.782547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:41.688 [2024-12-09 15:03:19.782556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.688 [2024-12-09 15:03:19.782635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.688 [2024-12-09 15:03:19.782646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:41.688 [2024-12-09 15:03:19.782657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:41.688 [2024-12-09 15:03:19.782666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.688 [2024-12-09 15:03:19.782689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:41.688 [2024-12-09 
15:03:19.783529] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:41.688 [2024-12-09 15:03:19.783566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.688 [2024-12-09 15:03:19.783575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:41.688 [2024-12-09 15:03:19.783590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.879 ms 00:30:41.688 [2024-12-09 15:03:19.783598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.688 [2024-12-09 15:03:19.783689] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 684f461e-728a-44da-8725-22aee97c18c0 00:30:41.688 [2024-12-09 15:03:19.785461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.688 [2024-12-09 15:03:19.785515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:41.688 [2024-12-09 15:03:19.785526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:41.688 [2024-12-09 15:03:19.785537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.688 [2024-12-09 15:03:19.794478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.688 [2024-12-09 15:03:19.794537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:41.689 [2024-12-09 15:03:19.794549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.897 ms 00:30:41.689 [2024-12-09 15:03:19.794559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.794610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.794621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:41.689 [2024-12-09 15:03:19.794630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:41.689 [2024-12-09 15:03:19.794643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.794705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.794719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:41.689 [2024-12-09 15:03:19.794731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:41.689 [2024-12-09 15:03:19.794741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.794765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:41.689 [2024-12-09 15:03:19.799287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.799331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:41.689 [2024-12-09 15:03:19.799346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.526 ms 00:30:41.689 [2024-12-09 15:03:19.799353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.799391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.799399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:41.689 [2024-12-09 15:03:19.799411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:41.689 [2024-12-09 15:03:19.799418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.799460] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:41.689 [2024-12-09 15:03:19.799612] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:41.689 [2024-12-09 15:03:19.799630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:41.689 [2024-12-09 15:03:19.799641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:41.689 [2024-12-09 15:03:19.799654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:41.689 [2024-12-09 15:03:19.799664] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:41.689 [2024-12-09 15:03:19.799675] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:41.689 [2024-12-09 15:03:19.799683] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:41.689 [2024-12-09 15:03:19.799697] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:41.689 [2024-12-09 15:03:19.799706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:41.689 [2024-12-09 15:03:19.799716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.799723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:41.689 [2024-12-09 15:03:19.799733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:30:41.689 [2024-12-09 15:03:19.799741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.799849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.689 [2024-12-09 15:03:19.799874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:41.689 [2024-12-09 15:03:19.799886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:30:41.689 [2024-12-09 15:03:19.799893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.689 [2024-12-09 15:03:19.800000] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:41.689 [2024-12-09 15:03:19.800011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:41.689 [2024-12-09 15:03:19.800022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:41.689 [2024-12-09 15:03:19.800047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:41.689 [2024-12-09 15:03:19.800064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:41.689 [2024-12-09 15:03:19.800073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:41.689 [2024-12-09 15:03:19.800080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:41.689 [2024-12-09 15:03:19.800098] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:41.689 [2024-12-09 15:03:19.800107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:41.689 [2024-12-09 15:03:19.800123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:41.689 [2024-12-09 15:03:19.800130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:41.689 [2024-12-09 15:03:19.800148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:41.689 [2024-12-09 15:03:19.800156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:41.689 [2024-12-09 15:03:19.800172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:41.689 [2024-12-09 15:03:19.800198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:41.689 [2024-12-09 15:03:19.800223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:41.689 [2024-12-09 15:03:19.800245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:41.689 [2024-12-09 15:03:19.800272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:41.689 [2024-12-09 15:03:19.800295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:41.689 [2024-12-09 15:03:19.800321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:41.689 [2024-12-09 15:03:19.800343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:41.689 [2024-12-09 15:03:19.800352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800358] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:41.689 [2024-12-09 15:03:19.800368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:41.689 [2024-12-09 15:03:19.800375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:41.689 [2024-12-09 15:03:19.800393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:41.689 [2024-12-09 15:03:19.800404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:41.689 [2024-12-09 15:03:19.800410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:41.689 [2024-12-09 15:03:19.800419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:41.689 [2024-12-09 15:03:19.800426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:41.689 [2024-12-09 15:03:19.800436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:41.689 [2024-12-09 15:03:19.800446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:41.689 [2024-12-09 15:03:19.800462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:41.689 [2024-12-09 15:03:19.800481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:41.689 [2024-12-09 15:03:19.800506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:41.689 [2024-12-09 15:03:19.800515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:41.689 [2024-12-09 15:03:19.800522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:41.689 [2024-12-09 15:03:19.800534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:41.689 [2024-12-09 15:03:19.800577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:41.690 [2024-12-09 15:03:19.800586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:41.690 [2024-12-09 15:03:19.800593] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:41.690 [2024-12-09 15:03:19.800605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:41.690 [2024-12-09 15:03:19.800612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:41.690 [2024-12-09 15:03:19.800622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:41.690 [2024-12-09 15:03:19.800628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:41.690 [2024-12-09 15:03:19.800638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:41.690 [2024-12-09 15:03:19.800645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.690 [2024-12-09 15:03:19.800655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:41.690 [2024-12-09 15:03:19.800662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:30:41.690 [2024-12-09 15:03:19.800672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.690 [2024-12-09 15:03:19.800710] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
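The bdev stack assembled in the trace above condenses to the RPC chain below: base NVMe controller at 0000:00:11.0, an lvstore plus a thin-provisioned lvol on it, cache NVMe controller at 0000:00:10.0 with a 5120 MiB split, and finally the FTL bdev. Capturing the identifiers into shell variables is a simplification of the trace, which extracts them with jq; note the 20480 MiB lvol is thin (-t), which is how it fits on a base namespace that bdev_get_bdevs reports as 1310720 blocks * 4096 B = 5120 MiB.

    # Condensed replay of the provisioning chain traced above.
    RPC="$SPDK/scripts/rpc.py"
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
    lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)                    # prints the lvstore UUID
    base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")          # thin 20 GiB lvol
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
    $RPC bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

bdev_ftl_create then drives the whole 'FTL startup' management sequence logged above, ending in the NV cache scrub it warns about.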
00:30:41.690 [2024-12-09 15:03:19.800724] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:45.896 [2024-12-09 15:03:23.725320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.725414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:45.896 [2024-12-09 15:03:23.725432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3924.595 ms 00:30:45.896 [2024-12-09 15:03:23.725444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.757451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.757521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:45.896 [2024-12-09 15:03:23.757536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.758 ms 00:30:45.896 [2024-12-09 15:03:23.757547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.757632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.757645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:45.896 [2024-12-09 15:03:23.757655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:45.896 [2024-12-09 15:03:23.757672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.793338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.793411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:45.896 [2024-12-09 15:03:23.793424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.627 ms 00:30:45.896 [2024-12-09 15:03:23.793435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.793472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.793487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:45.896 [2024-12-09 15:03:23.793496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:45.896 [2024-12-09 15:03:23.793506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.794145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.794193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:45.896 [2024-12-09 15:03:23.794213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:30:45.896 [2024-12-09 15:03:23.794224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.794273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.794284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:45.896 [2024-12-09 15:03:23.794295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:45.896 [2024-12-09 15:03:23.794308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.812061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.812112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:45.896 [2024-12-09 15:03:23.812124] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.734 ms 00:30:45.896 [2024-12-09 15:03:23.812134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.844466] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:45.896 [2024-12-09 15:03:23.845875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.845921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:45.896 [2024-12-09 15:03:23.845937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.645 ms 00:30:45.896 [2024-12-09 15:03:23.845946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.877212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.877267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:45.896 [2024-12-09 15:03:23.877284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.220 ms 00:30:45.896 [2024-12-09 15:03:23.877293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.877401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.877416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:45.896 [2024-12-09 15:03:23.877431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:30:45.896 [2024-12-09 15:03:23.877440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.903015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.903064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:45.896 [2024-12-09 15:03:23.903081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.517 ms 00:30:45.896 [2024-12-09 15:03:23.903091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.928038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.928083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:45.896 [2024-12-09 15:03:23.928098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.885 ms 00:30:45.896 [2024-12-09 15:03:23.928106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.896 [2024-12-09 15:03:23.928714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.896 [2024-12-09 15:03:23.928736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:45.897 [2024-12-09 15:03:23.928748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:30:45.897 [2024-12-09 15:03:23.928757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.897 [2024-12-09 15:03:24.015487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.897 [2024-12-09 15:03:24.015544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:45.897 [2024-12-09 15:03:24.015565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.679 ms 00:30:45.897 [2024-12-09 15:03:24.015574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.042843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:46.158 [2024-12-09 15:03:24.042894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:46.158 [2024-12-09 15:03:24.042910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.165 ms 00:30:46.158 [2024-12-09 15:03:24.042919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.068849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.158 [2024-12-09 15:03:24.068899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:46.158 [2024-12-09 15:03:24.068914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.861 ms 00:30:46.158 [2024-12-09 15:03:24.068921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.094644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.158 [2024-12-09 15:03:24.094689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:46.158 [2024-12-09 15:03:24.094705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.667 ms 00:30:46.158 [2024-12-09 15:03:24.094712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.094769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.158 [2024-12-09 15:03:24.094779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:46.158 [2024-12-09 15:03:24.094794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:46.158 [2024-12-09 15:03:24.094815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.094911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.158 [2024-12-09 15:03:24.094925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:46.158 [2024-12-09 15:03:24.094949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:46.158 [2024-12-09 15:03:24.094957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.158 [2024-12-09 15:03:24.096302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4313.335 ms, result 0 00:30:46.158 { 00:30:46.158 "name": "ftl", 00:30:46.158 "uuid": "684f461e-728a-44da-8725-22aee97c18c0" 00:30:46.158 } 00:30:46.158 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:46.419 [2024-12-09 15:03:24.319268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.420 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:46.681 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:46.681 [2024-12-09 15:03:24.739740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:46.681 15:03:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:46.943 [2024-12-09 15:03:24.957171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:46.943 15:03:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:47.214 Fill FTL, iteration 1 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:47.214 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84198 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84198 /var/tmp/spdk.tgt.sock 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84198 ']' 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.215 15:03:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:47.476 [2024-12-09 15:03:25.403525] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
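The export and initiator pairing traced above is the heart of the tcp setup helpers: the first spdk_tgt publishes the FTL bdev over NVMe/TCP on loopback, and a second spdk_tgt pinned to core 1 gets its own RPC socket so it can attach that namespace. The commands below appear verbatim in the trace:

    # Target side: export bdev "ftl" over NVMe/TCP on 127.0.0.1:4420.
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Initiator side: a second target process on core 1, separate RPC socket.
    "$SPDK/build/bin/spdk_tgt" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &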
00:30:47.476 [2024-12-09 15:03:25.403674] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84198 ] 00:30:47.476 [2024-12-09 15:03:25.568912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.738 [2024-12-09 15:03:25.689438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.311 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.311 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:48.311 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:48.572 ftln1 00:30:48.572 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:48.572 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84198 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84198 ']' 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84198 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84198 00:30:48.833 killing process with pid 84198 00:30:48.833 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:48.834 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:48.834 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84198' 00:30:48.834 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84198 00:30:48.834 15:03:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84198 00:30:50.768 15:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:50.768 15:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:50.768 [2024-12-09 15:03:28.544317] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
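What the trace runs next is the tcp_dd pattern repeated for every fill and readback: the initiator attaches the exported namespace (SPDK names the resulting bdev ftln1), only the bdev subsystem is snapshotted into ini.json, the initiator process is killed, and spdk_dd replays that JSON so it can address ftln1 directly. A condensed sketch:

    # Attach the NVMe/TCP namespace on the initiator; the bdev comes up as ftln1.
    INI="$RPC -s /var/tmp/spdk.tgt.sock"
    $INI bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Snapshot just the bdev subsystem into the config spdk_dd will replay.
    { echo '{"subsystems": ['; $INI save_subsystem_config -n bdev; echo ']}'; } \
        > "$SPDK/test/ftl/config/ini.json"
    # Fill iteration 1: 1024 x 1 MiB of urandom into ftln1 at offset 0, QD 2.
    "$SPDK/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0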
00:30:50.768 [2024-12-09 15:03:28.544582] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84245 ] 00:30:50.768 [2024-12-09 15:03:28.702555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.768 [2024-12-09 15:03:28.810140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.153  [2024-12-09T15:03:31.219Z] Copying: 183/1024 [MB] (183 MBps) [2024-12-09T15:03:32.607Z] Copying: 396/1024 [MB] (213 MBps) [2024-12-09T15:03:33.551Z] Copying: 618/1024 [MB] (222 MBps) [2024-12-09T15:03:34.123Z] Copying: 842/1024 [MB] (224 MBps) [2024-12-09T15:03:34.695Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:30:56.573 00:30:56.574 Calculate MD5 checksum, iteration 1 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:56.574 15:03:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:56.574 [2024-12-09 15:03:34.678618] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
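The checksum pass mirrors the fill with the direction reversed: the same 1024 MiB stripe is read from ftln1 into a scratch file and hashed, and the digest is stashed for comparison after the shutdown/upgrade cycle. Sketch, with flags as traced:

    # Readback for iteration 1, then hash the scratch file.
    "$SPDK/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$SPDK/test/ftl/file" --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[0]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')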
00:30:56.574 [2024-12-09 15:03:34.678730] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84309 ] 00:30:56.834 [2024-12-09 15:03:34.833877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.834 [2024-12-09 15:03:34.931139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.222  [2024-12-09T15:03:36.916Z] Copying: 621/1024 [MB] (621 MBps) [2024-12-09T15:03:37.488Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:30:59.366 00:30:59.366 15:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:59.366 15:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:01.900 Fill FTL, iteration 2 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=117166d35b61a340f9b65dc4311ccd92 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:01.900 15:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:01.900 [2024-12-09 15:03:39.613212] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
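Iteration 2 simply advances the window by one stripe: with bs=1048576 and count=1024, --seek=1024 places the write at 1024 * 1 MiB = 1 GiB, which is exactly the size=1073741824 set at the top of the run. The two traced iterations are therefore equivalent to a loop of this shape (a paraphrase of the traced steps, not the literal script text; tcp_dd is the harness helper seen in the trace):

    # One 1 GiB stripe per pass: write it, read it back, hash it.
    bs=1048576 count=1024 qd=2
    for ((i = 0; i < 2; i++)); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$((i * count))
        tcp_dd --ib=ftln1 --of="$SPDK/test/ftl/file" --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
        sums[i]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')
    done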
00:31:01.900 [2024-12-09 15:03:39.613325] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84367 ] 00:31:01.900 [2024-12-09 15:03:39.768709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.900 [2024-12-09 15:03:39.855936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.282  [2024-12-09T15:03:42.347Z] Copying: 225/1024 [MB] (225 MBps) [2024-12-09T15:03:43.290Z] Copying: 466/1024 [MB] (241 MBps) [2024-12-09T15:03:44.231Z] Copying: 686/1024 [MB] (220 MBps) [2024-12-09T15:03:44.804Z] Copying: 920/1024 [MB] (234 MBps) [2024-12-09T15:03:45.377Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:31:07.255 00:31:07.255 Calculate MD5 checksum, iteration 2 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:07.255 15:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:07.255 [2024-12-09 15:03:45.368882] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
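Note the dd-style offset convention visible in these invocations: --seek offsets the output side (used when filling ftln1) and --skip offsets the input side (used when reading a stripe back), both counted in --bs units:

    # Offsets are in --bs units (1048576 bytes here):
    #   write stripe 2:  --ob=ftln1 --seek=1024   # output offset = 1 GiB
    #   read  stripe 2:  --ib=ftln1 --skip=1024   # input  offset = 1 GiB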
00:31:07.255 [2024-12-09 15:03:45.368998] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84427 ] 00:31:07.516 [2024-12-09 15:03:45.525643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.516 [2024-12-09 15:03:45.629481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.433  [2024-12-09T15:03:47.816Z] Copying: 657/1024 [MB] (657 MBps) [2024-12-09T15:03:48.759Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:31:10.637 00:31:10.637 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:10.637 15:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:13.172 15:03:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:13.173 15:03:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a6b0b7746848d5a84a044335cb35f07e 00:31:13.173 15:03:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:13.173 15:03:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:13.173 15:03:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:13.173 [2024-12-09 15:03:51.040672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.173 [2024-12-09 15:03:51.040712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:13.173 [2024-12-09 15:03:51.040723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:13.173 [2024-12-09 15:03:51.040730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.173 [2024-12-09 15:03:51.040748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.173 [2024-12-09 15:03:51.040758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:13.173 [2024-12-09 15:03:51.040764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:13.173 [2024-12-09 15:03:51.040770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.173 [2024-12-09 15:03:51.040785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.173 [2024-12-09 15:03:51.040792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:13.173 [2024-12-09 15:03:51.040798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:13.173 [2024-12-09 15:03:51.040814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.173 [2024-12-09 15:03:51.040864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.183 ms, result 0 00:31:13.173 true 00:31:13.173 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:13.173 { 00:31:13.173 "name": "ftl", 00:31:13.173 "properties": [ 00:31:13.173 { 00:31:13.173 "name": "superblock_version", 00:31:13.173 "value": 5, 00:31:13.173 "read-only": true 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "name": "base_device", 00:31:13.173 "bands": [ 00:31:13.173 { 00:31:13.173 "id": 0, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 
00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 1, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 2, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 3, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 4, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 5, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 6, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 7, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 8, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 9, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 10, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 11, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 12, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 13, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 14, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 15, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 16, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 17, 00:31:13.173 "state": "FREE", 00:31:13.173 "validity": 0.0 00:31:13.173 } 00:31:13.173 ], 00:31:13.173 "read-only": true 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "name": "cache_device", 00:31:13.173 "type": "bdev", 00:31:13.173 "chunks": [ 00:31:13.173 { 00:31:13.173 "id": 0, 00:31:13.173 "state": "INACTIVE", 00:31:13.173 "utilization": 0.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 1, 00:31:13.173 "state": "CLOSED", 00:31:13.173 "utilization": 1.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 2, 00:31:13.173 "state": "CLOSED", 00:31:13.173 "utilization": 1.0 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 3, 00:31:13.173 "state": "OPEN", 00:31:13.173 "utilization": 0.001953125 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "id": 4, 00:31:13.173 "state": "OPEN", 00:31:13.173 "utilization": 0.0 00:31:13.173 } 00:31:13.173 ], 00:31:13.173 "read-only": true 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "name": "verbose_mode", 00:31:13.173 "value": true, 00:31:13.173 "unit": "", 00:31:13.173 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:13.173 }, 00:31:13.173 { 00:31:13.173 "name": "prep_upgrade_on_shutdown", 00:31:13.173 "value": false, 00:31:13.173 "unit": "", 00:31:13.173 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:13.173 } 00:31:13.173 ] 00:31:13.173 } 00:31:13.173 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:13.432 [2024-12-09 15:03:51.465027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:13.432 [2024-12-09 15:03:51.465062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:13.432 [2024-12-09 15:03:51.465071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:13.432 [2024-12-09 15:03:51.465078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.432 [2024-12-09 15:03:51.465095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.432 [2024-12-09 15:03:51.465102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:13.432 [2024-12-09 15:03:51.465108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:13.432 [2024-12-09 15:03:51.465113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.432 [2024-12-09 15:03:51.465128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.432 [2024-12-09 15:03:51.465134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:13.432 [2024-12-09 15:03:51.465140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:13.432 [2024-12-09 15:03:51.465145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.432 [2024-12-09 15:03:51.465188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:31:13.432 true 00:31:13.432 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:13.432 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:13.432 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:13.690 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:13.690 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:13.690 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:13.948 [2024-12-09 15:03:51.837300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.948 [2024-12-09 15:03:51.837329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:13.948 [2024-12-09 15:03:51.837338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:13.948 [2024-12-09 15:03:51.837343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.948 [2024-12-09 15:03:51.837359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.948 [2024-12-09 15:03:51.837365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:13.948 [2024-12-09 15:03:51.837372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:13.948 [2024-12-09 15:03:51.837378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.948 [2024-12-09 15:03:51.837392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.948 [2024-12-09 15:03:51.837398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:13.948 [2024-12-09 15:03:51.837403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:13.948 [2024-12-09 15:03:51.837408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:13.948 [2024-12-09 15:03:51.837449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.139 ms, result 0 00:31:13.948 true 00:31:13.948 15:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:13.948 { 00:31:13.948 "name": "ftl", 00:31:13.948 "properties": [ 00:31:13.948 { 00:31:13.948 "name": "superblock_version", 00:31:13.949 "value": 5, 00:31:13.949 "read-only": true 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "name": "base_device", 00:31:13.949 "bands": [ 00:31:13.949 { 00:31:13.949 "id": 0, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 1, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 2, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 3, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 4, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 5, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 6, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 7, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 8, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 9, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 10, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 11, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 12, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 13, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 14, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 15, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 16, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 17, 00:31:13.949 "state": "FREE", 00:31:13.949 "validity": 0.0 00:31:13.949 } 00:31:13.949 ], 00:31:13.949 "read-only": true 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "name": "cache_device", 00:31:13.949 "type": "bdev", 00:31:13.949 "chunks": [ 00:31:13.949 { 00:31:13.949 "id": 0, 00:31:13.949 "state": "INACTIVE", 00:31:13.949 "utilization": 0.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 1, 00:31:13.949 "state": "CLOSED", 00:31:13.949 "utilization": 1.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 2, 00:31:13.949 "state": "CLOSED", 00:31:13.949 "utilization": 1.0 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 3, 00:31:13.949 "state": "OPEN", 00:31:13.949 "utilization": 0.001953125 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "id": 4, 00:31:13.949 "state": "OPEN", 00:31:13.949 "utilization": 0.0 00:31:13.949 } 00:31:13.949 ], 00:31:13.949 "read-only": true 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "name": "verbose_mode", 
00:31:13.949 "value": true, 00:31:13.949 "unit": "", 00:31:13.949 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:13.949 }, 00:31:13.949 { 00:31:13.949 "name": "prep_upgrade_on_shutdown", 00:31:13.949 "value": true, 00:31:13.949 "unit": "", 00:31:13.949 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:13.949 } 00:31:13.949 ] 00:31:13.949 } 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84070 ]] 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84070 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84070 ']' 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84070 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.949 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84070 00:31:14.207 killing process with pid 84070 00:31:14.207 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.207 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.207 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84070' 00:31:14.207 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84070 00:31:14.207 15:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84070 00:31:14.774 [2024-12-09 15:03:52.621839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:14.774 [2024-12-09 15:03:52.632105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.774 [2024-12-09 15:03:52.632135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:14.774 [2024-12-09 15:03:52.632144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:14.774 [2024-12-09 15:03:52.632151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:14.774 [2024-12-09 15:03:52.632168] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:14.774 [2024-12-09 15:03:52.634215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:14.774 [2024-12-09 15:03:52.634235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:14.774 [2024-12-09 15:03:52.634244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.035 ms 00:31:14.774 [2024-12-09 15:03:52.634254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.668601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.668693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:24.860 [2024-12-09 15:04:01.668717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9034.302 ms 00:31:24.860 [2024-12-09 15:04:01.668728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.670526] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.670566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:24.860 [2024-12-09 15:04:01.670579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.777 ms 00:31:24.860 [2024-12-09 15:04:01.670588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.671754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.671779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:24.860 [2024-12-09 15:04:01.671790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:31:24.860 [2024-12-09 15:04:01.671818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.683552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.683599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:24.860 [2024-12-09 15:04:01.683610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.693 ms 00:31:24.860 [2024-12-09 15:04:01.683620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.691864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.691913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:24.860 [2024-12-09 15:04:01.691924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.193 ms 00:31:24.860 [2024-12-09 15:04:01.691933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.692070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.692090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:24.860 [2024-12-09 15:04:01.692100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:31:24.860 [2024-12-09 15:04:01.692108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.703466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.703515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:24.860 [2024-12-09 15:04:01.703528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.337 ms 00:31:24.860 [2024-12-09 15:04:01.703535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.714316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.714362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:24.860 [2024-12-09 15:04:01.714373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.730 ms 00:31:24.860 [2024-12-09 15:04:01.714380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.724914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.724959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:24.860 [2024-12-09 15:04:01.724970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.484 ms 00:31:24.860 [2024-12-09 15:04:01.724978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.735522] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.735566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:24.860 [2024-12-09 15:04:01.735576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.437 ms 00:31:24.860 [2024-12-09 15:04:01.735583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.735629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:24.860 [2024-12-09 15:04:01.735657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:24.860 [2024-12-09 15:04:01.735668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:24.860 [2024-12-09 15:04:01.735676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:24.860 [2024-12-09 15:04:01.735685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:24.860 [2024-12-09 15:04:01.735821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:24.860 [2024-12-09 15:04:01.735829] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 684f461e-728a-44da-8725-22aee97c18c0 00:31:24.860 [2024-12-09 15:04:01.735838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:24.860 [2024-12-09 15:04:01.735845] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:24.860 [2024-12-09 15:04:01.735853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:24.860 [2024-12-09 15:04:01.735862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:24.860 [2024-12-09 15:04:01.735873] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:24.860 [2024-12-09 15:04:01.735881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:24.860 [2024-12-09 15:04:01.735893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:24.860 [2024-12-09 15:04:01.735900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:24.860 [2024-12-09 15:04:01.735908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:24.860 [2024-12-09 15:04:01.735917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.735926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:24.860 [2024-12-09 15:04:01.735935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:31:24.860 [2024-12-09 15:04:01.735943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.749836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.749878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:24.860 [2024-12-09 15:04:01.749897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.855 ms 00:31:24.860 [2024-12-09 15:04:01.749907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.860 [2024-12-09 15:04:01.750303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.860 [2024-12-09 15:04:01.750321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:24.860 [2024-12-09 15:04:01.750332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:31:24.860 [2024-12-09 15:04:01.750340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.797459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.797516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:24.861 [2024-12-09 15:04:01.797527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.797536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.797575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.797584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:24.861 [2024-12-09 15:04:01.797592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.797600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.797695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.797707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:24.861 [2024-12-09 15:04:01.797722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.797730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.797748] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.797758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:24.861 [2024-12-09 15:04:01.797766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.797775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.883935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.883993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:24.861 [2024-12-09 15:04:01.884016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.884025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:24.861 [2024-12-09 15:04:01.955416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:24.861 [2024-12-09 15:04:01.955535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:24.861 [2024-12-09 15:04:01.955639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:24.861 [2024-12-09 15:04:01.955774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:24.861 [2024-12-09 15:04:01.955868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.955921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.955932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:24.861 [2024-12-09 15:04:01.955940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.955949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 
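
(Editor's aside, not part of the captured transcript.) The Rollback entries above are the tail of the FTL shutdown management sequence that the armed prep_upgrade_on_shutdown property triggers once the target process is killed. A minimal sketch of the same flow, assuming a live SPDK target with an FTL bdev named ftl; every command below appears verbatim in this transcript, and the jq filter is the chunk-counting check from upgrade_shutdown.sh:

  # Expose the advanced FTL properties, then arm the shutdown-upgrade path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  # Count cache chunks that still hold data (this run reported used=3).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

  # Killing the target now makes FTL persist the L2P and metadata and set the
  # clean state, as the Persist/"Set FTL clean state" steps traced above show.
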
[2024-12-09 15:04:01.956006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:24.861 [2024-12-09 15:04:01.956017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:24.861 [2024-12-09 15:04:01.956026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:24.861 [2024-12-09 15:04:01.956034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.861 [2024-12-09 15:04:01.956177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9324.005 ms, result 0 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84631 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84631 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84631 ']' 00:31:25.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.122 15:04:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:25.382 [2024-12-09 15:04:03.283994] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:31:25.382 [2024-12-09 15:04:03.284148] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84631 ] 00:31:25.382 [2024-12-09 15:04:03.446281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.642 [2024-12-09 15:04:03.568936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.587 [2024-12-09 15:04:04.364222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:26.587 [2024-12-09 15:04:04.364313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:26.587 [2024-12-09 15:04:04.518005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.518074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:26.587 [2024-12-09 15:04:04.518090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:26.587 [2024-12-09 15:04:04.518099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.518166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.518178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:26.587 [2024-12-09 15:04:04.518187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:26.587 [2024-12-09 15:04:04.518194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.518223] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:26.587 [2024-12-09 15:04:04.519032] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:26.587 [2024-12-09 15:04:04.519070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.519079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:26.587 [2024-12-09 15:04:04.519090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.858 ms 00:31:26.587 [2024-12-09 15:04:04.519098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.520897] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:26.587 [2024-12-09 15:04:04.535709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.535767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:26.587 [2024-12-09 15:04:04.535789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.815 ms 00:31:26.587 [2024-12-09 15:04:04.535797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.535895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.535906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:26.587 [2024-12-09 15:04:04.535915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:26.587 [2024-12-09 15:04:04.535923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.544840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 
15:04:04.544887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:26.587 [2024-12-09 15:04:04.544899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.825 ms 00:31:26.587 [2024-12-09 15:04:04.544908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.544980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.544990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:26.587 [2024-12-09 15:04:04.544999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:26.587 [2024-12-09 15:04:04.545008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.545056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.545071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:26.587 [2024-12-09 15:04:04.545080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:26.587 [2024-12-09 15:04:04.545088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.545114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:26.587 [2024-12-09 15:04:04.549314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.549364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:26.587 [2024-12-09 15:04:04.549375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.206 ms 00:31:26.587 [2024-12-09 15:04:04.549388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.549424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.549433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:26.587 [2024-12-09 15:04:04.549442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:26.587 [2024-12-09 15:04:04.549450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.549511] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:26.587 [2024-12-09 15:04:04.549540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:26.587 [2024-12-09 15:04:04.549577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:26.587 [2024-12-09 15:04:04.549594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:26.587 [2024-12-09 15:04:04.549700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:26.587 [2024-12-09 15:04:04.549712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:26.587 [2024-12-09 15:04:04.549723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:26.587 [2024-12-09 15:04:04.549734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:26.587 [2024-12-09 15:04:04.549743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:26.587 [2024-12-09 15:04:04.549755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:26.587 [2024-12-09 15:04:04.549764] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:26.587 [2024-12-09 15:04:04.549772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:26.587 [2024-12-09 15:04:04.549780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:26.587 [2024-12-09 15:04:04.549788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.549795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:26.587 [2024-12-09 15:04:04.549820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:31:26.587 [2024-12-09 15:04:04.549827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.549914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.587 [2024-12-09 15:04:04.549923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:26.587 [2024-12-09 15:04:04.549934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:26.587 [2024-12-09 15:04:04.549942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.587 [2024-12-09 15:04:04.550047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:26.587 [2024-12-09 15:04:04.550069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:26.587 [2024-12-09 15:04:04.550078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:26.587 [2024-12-09 15:04:04.550086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:26.587 [2024-12-09 15:04:04.550103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:26.587 [2024-12-09 15:04:04.550118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:26.587 [2024-12-09 15:04:04.550126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:26.587 [2024-12-09 15:04:04.550133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:26.587 [2024-12-09 15:04:04.550148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:26.587 [2024-12-09 15:04:04.550155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:26.587 [2024-12-09 15:04:04.550171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:26.587 [2024-12-09 15:04:04.550178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:26.587 [2024-12-09 15:04:04.550193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:26.587 [2024-12-09 15:04:04.550199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.587 [2024-12-09 15:04:04.550207] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:26.587 [2024-12-09 15:04:04.550214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:26.587 [2024-12-09 15:04:04.550221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:26.587 [2024-12-09 15:04:04.550228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:26.588 [2024-12-09 15:04:04.550244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:26.588 [2024-12-09 15:04:04.550251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:26.588 [2024-12-09 15:04:04.550265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:26.588 [2024-12-09 15:04:04.550272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:26.588 [2024-12-09 15:04:04.550286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:26.588 [2024-12-09 15:04:04.550292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:26.588 [2024-12-09 15:04:04.550306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:26.588 [2024-12-09 15:04:04.550312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:26.588 [2024-12-09 15:04:04.550325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:26.588 [2024-12-09 15:04:04.550345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:26.588 [2024-12-09 15:04:04.550364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:26.588 [2024-12-09 15:04:04.550371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:26.588 [2024-12-09 15:04:04.550385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:26.588 [2024-12-09 15:04:04.550396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:26.588 [2024-12-09 15:04:04.550416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:26.588 [2024-12-09 15:04:04.550423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:26.588 [2024-12-09 15:04:04.550430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:26.588 [2024-12-09 15:04:04.550437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:26.588 [2024-12-09 15:04:04.550445] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:26.588 [2024-12-09 15:04:04.550452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:26.588 [2024-12-09 15:04:04.550461] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:26.588 [2024-12-09 15:04:04.550471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:26.588 [2024-12-09 15:04:04.550487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:26.588 [2024-12-09 15:04:04.550509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:26.588 [2024-12-09 15:04:04.550516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:26.588 [2024-12-09 15:04:04.550523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:26.588 [2024-12-09 15:04:04.550531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:26.588 [2024-12-09 15:04:04.550580] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:26.588 [2024-12-09 15:04:04.550589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:26.588 [2024-12-09 15:04:04.550605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:26.588 [2024-12-09 15:04:04.550613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:26.588 [2024-12-09 15:04:04.550622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:26.588 [2024-12-09 15:04:04.550629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.588 [2024-12-09 15:04:04.550637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:26.588 [2024-12-09 15:04:04.550646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:31:26.588 [2024-12-09 15:04:04.550654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.588 [2024-12-09 15:04:04.550698] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:26.588 [2024-12-09 15:04:04.550714] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:30.795 [2024-12-09 15:04:08.908358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.795 [2024-12-09 15:04:08.908430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:30.795 [2024-12-09 15:04:08.908448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4357.646 ms 00:31:30.795 [2024-12-09 15:04:08.908457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.939841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.939902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:31.057 [2024-12-09 15:04:08.939917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.013 ms 00:31:31.057 [2024-12-09 15:04:08.939926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.940018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.940037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:31.057 [2024-12-09 15:04:08.940047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:31.057 [2024-12-09 15:04:08.940058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.975221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.975273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:31.057 [2024-12-09 15:04:08.975289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.103 ms 00:31:31.057 [2024-12-09 15:04:08.975298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.975342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.975352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:31.057 [2024-12-09 15:04:08.975361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:31.057 [2024-12-09 15:04:08.975369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.975979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.976013] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:31.057 [2024-12-09 15:04:08.976025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:31:31.057 [2024-12-09 15:04:08.976033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.976087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.976097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:31.057 [2024-12-09 15:04:08.976106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:31.057 [2024-12-09 15:04:08.976114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:08.993419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:08.993463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:31.057 [2024-12-09 15:04:08.993474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.280 ms 00:31:31.057 [2024-12-09 15:04:08.993482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.021407] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:31.057 [2024-12-09 15:04:09.021469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:31.057 [2024-12-09 15:04:09.021485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.021495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:31.057 [2024-12-09 15:04:09.021506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.888 ms 00:31:31.057 [2024-12-09 15:04:09.021515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.036227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.036278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:31.057 [2024-12-09 15:04:09.036292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.653 ms 00:31:31.057 [2024-12-09 15:04:09.036300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.048752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.048808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:31.057 [2024-12-09 15:04:09.048820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.396 ms 00:31:31.057 [2024-12-09 15:04:09.048827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.061326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.061370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:31.057 [2024-12-09 15:04:09.061382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.451 ms 00:31:31.057 [2024-12-09 15:04:09.061390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.062057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.062087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:31.057 [2024-12-09 
15:04:09.062098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:31:31.057 [2024-12-09 15:04:09.062106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.127340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.127404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:31.057 [2024-12-09 15:04:09.127419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.211 ms 00:31:31.057 [2024-12-09 15:04:09.127429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.138915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:31.057 [2024-12-09 15:04:09.140053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.140094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:31.057 [2024-12-09 15:04:09.140106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.559 ms 00:31:31.057 [2024-12-09 15:04:09.140114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.140213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.140227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:31.057 [2024-12-09 15:04:09.140238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:31.057 [2024-12-09 15:04:09.140246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.057 [2024-12-09 15:04:09.140305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.057 [2024-12-09 15:04:09.140316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:31.057 [2024-12-09 15:04:09.140325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:31.057 [2024-12-09 15:04:09.140333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.058 [2024-12-09 15:04:09.140357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.058 [2024-12-09 15:04:09.140366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:31.058 [2024-12-09 15:04:09.140377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:31.058 [2024-12-09 15:04:09.140386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.058 [2024-12-09 15:04:09.140424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:31.058 [2024-12-09 15:04:09.140436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.058 [2024-12-09 15:04:09.140444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:31.058 [2024-12-09 15:04:09.140453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:31.058 [2024-12-09 15:04:09.140462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.058 [2024-12-09 15:04:09.165549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.058 [2024-12-09 15:04:09.165608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:31.058 [2024-12-09 15:04:09.165622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.065 ms 00:31:31.058 [2024-12-09 15:04:09.165630] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.058 [2024-12-09 15:04:09.165715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.058 [2024-12-09 15:04:09.165724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:31.058 [2024-12-09 15:04:09.165733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:31.058 [2024-12-09 15:04:09.165742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.058 [2024-12-09 15:04:09.167128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4648.574 ms, result 0 00:31:31.319 [2024-12-09 15:04:09.181995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.319 [2024-12-09 15:04:09.197992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:31.319 [2024-12-09 15:04:09.206159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:31.319 15:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.319 15:04:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:31.319 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:31.319 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:31.319 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:31.580 [2024-12-09 15:04:09.506266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.580 [2024-12-09 15:04:09.506321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:31.581 [2024-12-09 15:04:09.506341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:31.581 [2024-12-09 15:04:09.506350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.581 [2024-12-09 15:04:09.506376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.581 [2024-12-09 15:04:09.506385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:31.581 [2024-12-09 15:04:09.506394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:31.581 [2024-12-09 15:04:09.506403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.581 [2024-12-09 15:04:09.506424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.581 [2024-12-09 15:04:09.506433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:31.581 [2024-12-09 15:04:09.506441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:31.581 [2024-12-09 15:04:09.506449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.581 [2024-12-09 15:04:09.506511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.246 ms, result 0 00:31:31.581 true 00:31:31.581 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:31.842 { 00:31:31.842 "name": "ftl", 00:31:31.842 "properties": [ 00:31:31.842 { 00:31:31.842 "name": "superblock_version", 00:31:31.842 "value": 5, 00:31:31.842 "read-only": true 00:31:31.842 }, 
00:31:31.842 { 00:31:31.842 "name": "base_device", 00:31:31.842 "bands": [ 00:31:31.842 { 00:31:31.842 "id": 0, 00:31:31.842 "state": "CLOSED", 00:31:31.842 "validity": 1.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 1, 00:31:31.842 "state": "CLOSED", 00:31:31.842 "validity": 1.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 2, 00:31:31.842 "state": "CLOSED", 00:31:31.842 "validity": 0.007843137254901933 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 3, 00:31:31.842 "state": "FREE", 00:31:31.842 "validity": 0.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 4, 00:31:31.842 "state": "FREE", 00:31:31.842 "validity": 0.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 5, 00:31:31.842 "state": "FREE", 00:31:31.842 "validity": 0.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 6, 00:31:31.842 "state": "FREE", 00:31:31.842 "validity": 0.0 00:31:31.842 }, 00:31:31.842 { 00:31:31.842 "id": 7, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 8, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 9, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 10, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 11, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 12, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 13, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 14, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 15, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 16, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 17, 00:31:31.843 "state": "FREE", 00:31:31.843 "validity": 0.0 00:31:31.843 } 00:31:31.843 ], 00:31:31.843 "read-only": true 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "name": "cache_device", 00:31:31.843 "type": "bdev", 00:31:31.843 "chunks": [ 00:31:31.843 { 00:31:31.843 "id": 0, 00:31:31.843 "state": "INACTIVE", 00:31:31.843 "utilization": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 1, 00:31:31.843 "state": "OPEN", 00:31:31.843 "utilization": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 2, 00:31:31.843 "state": "OPEN", 00:31:31.843 "utilization": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 3, 00:31:31.843 "state": "FREE", 00:31:31.843 "utilization": 0.0 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "id": 4, 00:31:31.843 "state": "FREE", 00:31:31.843 "utilization": 0.0 00:31:31.843 } 00:31:31.843 ], 00:31:31.843 "read-only": true 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "name": "verbose_mode", 00:31:31.843 "value": true, 00:31:31.843 "unit": "", 00:31:31.843 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:31.843 }, 00:31:31.843 { 00:31:31.843 "name": "prep_upgrade_on_shutdown", 00:31:31.843 "value": false, 00:31:31.843 "unit": "", 00:31:31.843 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:31.843 } 00:31:31.843 ] 00:31:31.843 } 00:31:31.843 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:31.843 15:04:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:31.843 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:32.104 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:32.104 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:32.104 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:32.104 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:32.104 15:04:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:32.104 Validate MD5 checksum, iteration 1 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:32.104 15:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:32.366 [2024-12-09 15:04:10.261914] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
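The used-chunk and opened-band checks traced above reduce to two jq filters over the bdev_ftl_get_properties JSON. The sketch below restates them as a standalone script; the rpc.py path and bdev name are taken from this run, while the variable names and the error message are illustrative, not part of the test suite.

#!/usr/bin/env bash
# Re-run the two quiescence checks from upgrade_shutdown.sh by hand.
# rpc.py path and bdev name copied from this log; the rest is illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
props=$("$rpc" bdev_ftl_get_properties -b ftl)

# NV cache chunks that still hold data (utilization != 0.0)
used=$(jq '[.properties[] | select(.name == "cache_device")
            | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

# Bands left in the OPENED state
opened=$(jq '[.properties[] | select(.name == "bands")
              | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

# Both counts were 0 in this run, so the device was clean before shutdown.
[[ $used -eq 0 && $opened -eq 0 ]] || echo "device not quiesced: used=$used opened=$opened"

Both filters come verbatim from the -- # jq lines above. Note that the second one selects on a property named "bands", which does not match any property name in the JSON dump printed earlier (the band list lives under "base_device"), so as written it evaluates to 0 regardless of actual band state.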
00:31:32.366 [2024-12-09 15:04:10.262060] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84720 ] 00:31:32.366 [2024-12-09 15:04:10.426449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.627 [2024-12-09 15:04:10.568144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.541  [2024-12-09T15:04:13.235Z] Copying: 575/1024 [MB] (575 MBps) [2024-12-09T15:04:14.181Z] Copying: 1024/1024 [MB] (average 565 MBps) 00:31:36.059 00:31:36.059 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:36.059 15:04:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:38.592 Validate MD5 checksum, iteration 2 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=117166d35b61a340f9b65dc4311ccd92 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 117166d35b61a340f9b65dc4311ccd92 != \1\1\7\1\6\6\d\3\5\b\6\1\a\3\4\0\f\9\b\6\5\d\c\4\3\1\1\c\c\d\9\2 ]] 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:38.592 15:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:38.592 [2024-12-09 15:04:16.332430] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
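Each "Validate MD5 checksum" iteration above follows the same pattern: spdk_dd copies a 1 GiB window (--bs=1048576 --count=1024) out of ftln1 at the current --skip offset, md5sum hashes the result, and the bash [[ ... != ... ]] test compares it against the recorded value (the backslash-escaped right-hand side is just xtrace's rendering of the literal). A condensed sketch, with the command line taken from this log and the loop bookkeeping simplified:

#!/usr/bin/env bash
# Simplified per-iteration checksum pass; paths and flags from this run.
dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
skip=0
for i in 1 2; do
    echo "Validate MD5 checksum, iteration $i"
    "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    md5sum "$file" | cut -f1 -d' ' >> "$file.md5"   # record for the post-restart pass
    skip=$((skip + 1024))
done

After the dirty restart the same two windows are read back and hashed again; this run passes because the sums 117166d35b61a340f9b65dc4311ccd92 and a6b0b7746848d5a84a044335cb35f07e come out identical in both passes.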
00:31:38.592 [2024-12-09 15:04:16.332551] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84787 ] 00:31:38.592 [2024-12-09 15:04:16.489020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.592 [2024-12-09 15:04:16.577628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.976  [2024-12-09T15:04:19.036Z] Copying: 380/1024 [MB] (380 MBps) [2024-12-09T15:04:19.977Z] Copying: 754/1024 [MB] (374 MBps) [2024-12-09T15:04:20.917Z] Copying: 1024/1024 [MB] (average 386 MBps) 00:31:42.795 00:31:42.795 15:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:42.795 15:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a6b0b7746848d5a84a044335cb35f07e 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a6b0b7746848d5a84a044335cb35f07e != \a\6\b\0\b\7\7\4\6\8\4\8\d\5\a\8\4\a\0\4\4\3\3\5\c\b\3\5\f\0\7\e ]] 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84631 ]] 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84631 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:44.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84854 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84854 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84854 ']' 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
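The tcp_target_shutdown_dirty step above is the crash simulation at the heart of this test: the target holding the FTL device is killed with SIGKILL, so FTL gets no chance to persist a clean shutdown state, and a fresh target is then started from the same saved tgt.json. A rough equivalent of that sequence (binary and config paths from this run; the polling loop is a simplified stand-in for the harness's waitforlisten helper):

#!/usr/bin/env bash
# Kill the target hard, then restart it from the saved config so FTL
# comes up dirty and has to run recovery (see the trace that follows).
tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
spdk_tgt_pid=${1:?usage: $0 <target-pid>}   # 84631 in this run

kill -9 "$spdk_tgt_pid"                     # no graceful FTL teardown
"$tgt_bin" '--cpumask=[0]' --config="$cfg" &
spdk_tgt_pid=$!                             # 84854 in this run

# Stand-in for waitforlisten: poll until the RPC socket answers.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

The "SHM: clean 0, shm_clean 0" line and the Recover band state / Restore P2L checkpoints / Recover open chunk steps in the startup trace that follows are the direct consequence: the superblock was left dirty, so the new process reconstructs L2P and NV cache state instead of loading it.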
00:31:44.170 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.171 15:04:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:44.171 15:04:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:44.429 [2024-12-09 15:04:22.326408] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:31:44.429 [2024-12-09 15:04:22.326523] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84854 ] 00:31:44.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84631 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:44.429 [2024-12-09 15:04:22.480359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.688 [2024-12-09 15:04:22.553885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.256 [2024-12-09 15:04:23.126327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:45.256 [2024-12-09 15:04:23.126386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:45.256 [2024-12-09 15:04:23.269120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.269153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:45.256 [2024-12-09 15:04:23.269162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:45.256 [2024-12-09 15:04:23.269169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.269207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.269215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:45.256 [2024-12-09 15:04:23.269221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:45.256 [2024-12-09 15:04:23.269227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.269244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:45.256 [2024-12-09 15:04:23.269735] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:45.256 [2024-12-09 15:04:23.269753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.269758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:45.256 [2024-12-09 15:04:23.269765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:31:45.256 [2024-12-09 15:04:23.269770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.270048] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:45.256 [2024-12-09 15:04:23.282230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.282259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:45.256 [2024-12-09 15:04:23.282268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.183 ms 
00:31:45.256 [2024-12-09 15:04:23.282275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.288946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.288973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:45.256 [2024-12-09 15:04:23.288980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:45.256 [2024-12-09 15:04:23.288985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.289219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.289235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:45.256 [2024-12-09 15:04:23.289241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:31:45.256 [2024-12-09 15:04:23.289247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.289286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.289292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:45.256 [2024-12-09 15:04:23.289298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:45.256 [2024-12-09 15:04:23.289304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.289321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.289327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:45.256 [2024-12-09 15:04:23.289333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:45.256 [2024-12-09 15:04:23.289339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.289353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:45.256 [2024-12-09 15:04:23.291545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.291567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:45.256 [2024-12-09 15:04:23.291575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.194 ms 00:31:45.256 [2024-12-09 15:04:23.291580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.291602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.291608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:45.256 [2024-12-09 15:04:23.291614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:45.256 [2024-12-09 15:04:23.291619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.291635] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:45.256 [2024-12-09 15:04:23.291649] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:45.256 [2024-12-09 15:04:23.291675] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:45.256 [2024-12-09 15:04:23.291687] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:45.256 [2024-12-09 
15:04:23.291766] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:45.256 [2024-12-09 15:04:23.291774] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:45.256 [2024-12-09 15:04:23.291782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:45.256 [2024-12-09 15:04:23.291790] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:45.256 [2024-12-09 15:04:23.291797] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:45.256 [2024-12-09 15:04:23.291823] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:45.256 [2024-12-09 15:04:23.291828] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:45.256 [2024-12-09 15:04:23.291834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:45.256 [2024-12-09 15:04:23.291839] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:45.256 [2024-12-09 15:04:23.291847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.291852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:45.256 [2024-12-09 15:04:23.291858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:31:45.256 [2024-12-09 15:04:23.291863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.291928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.256 [2024-12-09 15:04:23.291934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:45.256 [2024-12-09 15:04:23.291939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:31:45.256 [2024-12-09 15:04:23.291944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.256 [2024-12-09 15:04:23.292018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:45.256 [2024-12-09 15:04:23.292027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:45.256 [2024-12-09 15:04:23.292033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.256 [2024-12-09 15:04:23.292038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.256 [2024-12-09 15:04:23.292044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:45.256 [2024-12-09 15:04:23.292049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:45.256 [2024-12-09 15:04:23.292054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:45.256 [2024-12-09 15:04:23.292059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:45.256 [2024-12-09 15:04:23.292065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:45.256 [2024-12-09 15:04:23.292069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:45.257 [2024-12-09 15:04:23.292079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:45.257 [2024-12-09 15:04:23.292084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 
15:04:23.292090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:45.257 [2024-12-09 15:04:23.292095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:45.257 [2024-12-09 15:04:23.292100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:45.257 [2024-12-09 15:04:23.292110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:45.257 [2024-12-09 15:04:23.292115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:45.257 [2024-12-09 15:04:23.292125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:45.257 [2024-12-09 15:04:23.292144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:45.257 [2024-12-09 15:04:23.292158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:45.257 [2024-12-09 15:04:23.292173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:45.257 [2024-12-09 15:04:23.292188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:45.257 [2024-12-09 15:04:23.292202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:45.257 [2024-12-09 15:04:23.292216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:45.257 [2024-12-09 15:04:23.292230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:45.257 [2024-12-09 15:04:23.292235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:45.257 [2024-12-09 15:04:23.292246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:45.257 
[2024-12-09 15:04:23.292252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:45.257 [2024-12-09 15:04:23.292263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:45.257 [2024-12-09 15:04:23.292268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:45.257 [2024-12-09 15:04:23.292273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:45.257 [2024-12-09 15:04:23.292278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:45.257 [2024-12-09 15:04:23.292283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:45.257 [2024-12-09 15:04:23.292288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:45.257 [2024-12-09 15:04:23.292295] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:45.257 [2024-12-09 15:04:23.292301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:45.257 [2024-12-09 15:04:23.292313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:45.257 [2024-12-09 15:04:23.292329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:45.257 [2024-12-09 15:04:23.292335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:45.257 [2024-12-09 15:04:23.292340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:45.257 [2024-12-09 15:04:23.292345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:45.257 [2024-12-09 15:04:23.292382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:45.257 [2024-12-09 15:04:23.292389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:45.257 [2024-12-09 15:04:23.292402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:45.257 [2024-12-09 15:04:23.292407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:45.257 [2024-12-09 15:04:23.292412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:45.257 [2024-12-09 15:04:23.292418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.292423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:45.257 [2024-12-09 15:04:23.292430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:31:45.257 [2024-12-09 15:04:23.292436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.311406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.311433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:45.257 [2024-12-09 15:04:23.311441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.934 ms 00:31:45.257 [2024-12-09 15:04:23.311446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.311473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.311479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:45.257 [2024-12-09 15:04:23.311485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:45.257 [2024-12-09 15:04:23.311490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.335129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.335156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:45.257 [2024-12-09 15:04:23.335164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.598 ms 00:31:45.257 [2024-12-09 15:04:23.335170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.335188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.335194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:45.257 [2024-12-09 15:04:23.335201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:45.257 [2024-12-09 15:04:23.335208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.335274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.335281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:31:45.257 [2024-12-09 15:04:23.335287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:45.257 [2024-12-09 15:04:23.335293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.335322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.335328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:45.257 [2024-12-09 15:04:23.335334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:45.257 [2024-12-09 15:04:23.335340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.346689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.346716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:45.257 [2024-12-09 15:04:23.346724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.330 ms 00:31:45.257 [2024-12-09 15:04:23.346729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.346812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.346821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:45.257 [2024-12-09 15:04:23.346827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:45.257 [2024-12-09 15:04:23.346833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.257 [2024-12-09 15:04:23.375533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.257 [2024-12-09 15:04:23.375566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:45.258 [2024-12-09 15:04:23.375576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.685 ms 00:31:45.258 [2024-12-09 15:04:23.375582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.382519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.382547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:45.516 [2024-12-09 15:04:23.382560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:31:45.516 [2024-12-09 15:04:23.382566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.424901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.424942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:45.516 [2024-12-09 15:04:23.424951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.293 ms 00:31:45.516 [2024-12-09 15:04:23.424958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.425062] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:45.516 [2024-12-09 15:04:23.425135] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:45.516 [2024-12-09 15:04:23.425207] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:45.516 [2024-12-09 15:04:23.425278] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:45.516 [2024-12-09 15:04:23.425286] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.425292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:45.516 [2024-12-09 15:04:23.425299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:31:45.516 [2024-12-09 15:04:23.425305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.425344] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:45.516 [2024-12-09 15:04:23.425353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.425361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:45.516 [2024-12-09 15:04:23.425368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:45.516 [2024-12-09 15:04:23.425373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.436149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.436181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:45.516 [2024-12-09 15:04:23.436189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.760 ms 00:31:45.516 [2024-12-09 15:04:23.436195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.442491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.442518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:45.516 [2024-12-09 15:04:23.442526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:45.516 [2024-12-09 15:04:23.442532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.516 [2024-12-09 15:04:23.442593] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:45.516 [2024-12-09 15:04:23.442704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.516 [2024-12-09 15:04:23.442719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:45.516 [2024-12-09 15:04:23.442726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:31:45.516 [2024-12-09 15:04:23.442731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.088 [2024-12-09 15:04:23.928826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.088 [2024-12-09 15:04:23.928892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:46.089 [2024-12-09 15:04:23.928908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 485.437 ms 00:31:46.089 [2024-12-09 15:04:23.928917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.089 [2024-12-09 15:04:23.933304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.089 [2024-12-09 15:04:23.933344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:46.089 [2024-12-09 15:04:23.933356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.415 ms 00:31:46.089 [2024-12-09 15:04:23.933365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.089 [2024-12-09 15:04:23.934227] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:31:46.089 [2024-12-09 15:04:23.934264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.089 [2024-12-09 15:04:23.934274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:46.089 [2024-12-09 15:04:23.934284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:31:46.089 [2024-12-09 15:04:23.934292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.089 [2024-12-09 15:04:23.934325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.089 [2024-12-09 15:04:23.934335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:46.089 [2024-12-09 15:04:23.934344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:46.089 [2024-12-09 15:04:23.934356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.089 [2024-12-09 15:04:23.934390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 491.793 ms, result 0 00:31:46.089 [2024-12-09 15:04:23.934428] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:46.089 [2024-12-09 15:04:23.934506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.089 [2024-12-09 15:04:23.934517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:46.089 [2024-12-09 15:04:23.934524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:31:46.089 [2024-12-09 15:04:23.934531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.640501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.640567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:46.663 [2024-12-09 15:04:24.640592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 704.912 ms 00:31:46.663 [2024-12-09 15:04:24.640600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.644074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.644110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:46.663 [2024-12-09 15:04:24.644119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.935 ms 00:31:46.663 [2024-12-09 15:04:24.644126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.644463] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:46.663 [2024-12-09 15:04:24.644497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.644503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:46.663 [2024-12-09 15:04:24.644511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:31:46.663 [2024-12-09 15:04:24.644517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.644544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.644551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:46.663 [2024-12-09 15:04:24.644558] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:46.663 [2024-12-09 15:04:24.644564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.644593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 710.161 ms, result 0 00:31:46.663 [2024-12-09 15:04:24.644629] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:46.663 [2024-12-09 15:04:24.644637] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:46.663 [2024-12-09 15:04:24.644646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.644653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:46.663 [2024-12-09 15:04:24.644660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1202.068 ms 00:31:46.663 [2024-12-09 15:04:24.644666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.644690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.644701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:46.663 [2024-12-09 15:04:24.644708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:46.663 [2024-12-09 15:04:24.644714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.653916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:46.663 [2024-12-09 15:04:24.654013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.654023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:46.663 [2024-12-09 15:04:24.654031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.286 ms 00:31:46.663 [2024-12-09 15:04:24.654037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.654576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.654599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:46.663 [2024-12-09 15:04:24.654609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:31:46.663 [2024-12-09 15:04:24.654615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:46.663 [2024-12-09 15:04:24.656336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.679 ms 00:31:46.663 [2024-12-09 15:04:24.656343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:46.663 [2024-12-09 15:04:24.656388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:46.663 [2024-12-09 15:04:24.656397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656477] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:46.663 [2024-12-09 15:04:24.656491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:46.663 [2024-12-09 15:04:24.656497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:46.663 [2024-12-09 15:04:24.656531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:46.663 [2024-12-09 15:04:24.656537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656562] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:46.663 [2024-12-09 15:04:24.656570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:46.663 [2024-12-09 15:04:24.656582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:46.663 [2024-12-09 15:04:24.656588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.656625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.663 [2024-12-09 15:04:24.656632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:46.663 [2024-12-09 15:04:24.656639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:46.663 [2024-12-09 15:04:24.656644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.663 [2024-12-09 15:04:24.659024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1388.572 ms, result 0 00:31:46.663 [2024-12-09 15:04:24.671536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.663 [2024-12-09 15:04:24.687476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:46.663 [2024-12-09 15:04:24.695769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:46.925 Validate MD5 checksum, iteration 1 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:46.925 15:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:46.925 [2024-12-09 15:04:24.904161] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:31:46.925 [2024-12-09 15:04:24.904317] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84883 ] 00:31:47.185 [2024-12-09 15:04:25.067124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.185 [2024-12-09 15:04:25.178419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.568  [2024-12-09T15:04:27.262Z] Copying: 687/1024 [MB] (687 MBps) [2024-12-09T15:04:28.205Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:31:50.083 00:31:50.345 15:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:50.345 15:04:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:52.251 Validate MD5 checksum, iteration 2 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=117166d35b61a340f9b65dc4311ccd92 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 117166d35b61a340f9b65dc4311ccd92 != \1\1\7\1\6\6\d\3\5\b\6\1\a\3\4\0\f\9\b\6\5\d\c\4\3\1\1\c\c\d\9\2 ]] 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:52.251 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:52.252 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:52.252 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:52.252 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:52.252 15:04:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:52.510 [2024-12-09 15:04:30.395409] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 00:31:52.510 [2024-12-09 15:04:30.395693] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84945 ] 00:31:52.510 [2024-12-09 15:04:30.556989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.768 [2024-12-09 15:04:30.662792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.153  [2024-12-09T15:04:33.213Z] Copying: 582/1024 [MB] (582 MBps) [2024-12-09T15:04:37.411Z] Copying: 1024/1024 [MB] (average 587 MBps) 00:31:59.289 00:31:59.289 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:59.289 15:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a6b0b7746848d5a84a044335cb35f07e 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a6b0b7746848d5a84a044335cb35f07e != \a\6\b\0\b\7\7\4\6\8\4\8\d\5\a\8\4\a\0\4\4\3\3\5\c\b\3\5\f\0\7\e ]] 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:00.701 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84854 ]] 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84854 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84854 ']' 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84854 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84854 00:32:00.974 killing process with pid 84854 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84854' 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84854 00:32:00.974 15:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84854 00:32:01.543 [2024-12-09 15:04:39.440017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:01.543 [2024-12-09 15:04:39.450089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.450123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:01.543 [2024-12-09 15:04:39.450134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:01.543 [2024-12-09 15:04:39.450141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.450157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:01.543 [2024-12-09 15:04:39.452255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.452283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:01.543 [2024-12-09 15:04:39.452291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.087 ms 00:32:01.543 [2024-12-09 15:04:39.452297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.452477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.452495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:01.543 [2024-12-09 15:04:39.452502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:32:01.543 [2024-12-09 15:04:39.452508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.453653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.453676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:01.543 [2024-12-09 15:04:39.453684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.134 ms 00:32:01.543 [2024-12-09 15:04:39.453693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.454554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.454574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:01.543 [2024-12-09 15:04:39.454582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:32:01.543 [2024-12-09 15:04:39.454588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.462061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.462089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:01.543 [2024-12-09 15:04:39.462100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.447 ms 00:32:01.543 [2024-12-09 15:04:39.462106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.466358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.466385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:32:01.543 [2024-12-09 15:04:39.466394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.226 ms 00:32:01.543 [2024-12-09 15:04:39.466401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.466456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.466464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:01.543 [2024-12-09 15:04:39.466471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:32:01.543 [2024-12-09 15:04:39.466480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.475372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.475398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:01.543 [2024-12-09 15:04:39.475405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.880 ms 00:32:01.543 [2024-12-09 15:04:39.475411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.482324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.482349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:01.543 [2024-12-09 15:04:39.482356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.889 ms 00:32:01.543 [2024-12-09 15:04:39.482362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.489487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.489511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:01.543 [2024-12-09 15:04:39.489518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.101 ms 00:32:01.543 [2024-12-09 15:04:39.489525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.496523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.496548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:01.543 [2024-12-09 15:04:39.496555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.955 ms 00:32:01.543 [2024-12-09 15:04:39.496561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.496583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:01.543 [2024-12-09 15:04:39.496594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:01.543 [2024-12-09 15:04:39.496602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:01.543 [2024-12-09 15:04:39.496608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:01.543 [2024-12-09 15:04:39.496614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496631] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.543 [2024-12-09 15:04:39.496700] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:01.543 [2024-12-09 15:04:39.496706] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 684f461e-728a-44da-8725-22aee97c18c0 00:32:01.543 [2024-12-09 15:04:39.496713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:01.543 [2024-12-09 15:04:39.496718] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:01.543 [2024-12-09 15:04:39.496724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:01.543 [2024-12-09 15:04:39.496729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:01.543 [2024-12-09 15:04:39.496734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:01.543 [2024-12-09 15:04:39.496740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:01.543 [2024-12-09 15:04:39.496749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:01.543 [2024-12-09 15:04:39.496754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:01.543 [2024-12-09 15:04:39.496758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:01.543 [2024-12-09 15:04:39.496764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.496770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:01.543 [2024-12-09 15:04:39.496777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:32:01.543 [2024-12-09 15:04:39.496783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.506274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.506298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:32:01.543 [2024-12-09 15:04:39.506306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.463 ms 00:32:01.543 [2024-12-09 15:04:39.506312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.506582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.543 [2024-12-09 15:04:39.506597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:01.543 [2024-12-09 15:04:39.506604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:32:01.543 [2024-12-09 15:04:39.506609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.539382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.543 [2024-12-09 15:04:39.539409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:01.543 [2024-12-09 15:04:39.539417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.543 [2024-12-09 15:04:39.539427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.539447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.543 [2024-12-09 15:04:39.539453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:01.543 [2024-12-09 15:04:39.539460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.543 [2024-12-09 15:04:39.539465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.539523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.543 [2024-12-09 15:04:39.539531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:01.543 [2024-12-09 15:04:39.539537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.543 [2024-12-09 15:04:39.539543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.543 [2024-12-09 15:04:39.539559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.543 [2024-12-09 15:04:39.539565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:01.544 [2024-12-09 15:04:39.539571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.539577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.599207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.599237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:01.544 [2024-12-09 15:04:39.599245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.599251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.646914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.646951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:01.544 [2024-12-09 15:04:39.646959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.646965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647020] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:01.544 [2024-12-09 15:04:39.647027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:01.544 [2024-12-09 15:04:39.647097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:01.544 [2024-12-09 15:04:39.647184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:01.544 [2024-12-09 15:04:39.647227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:01.544 [2024-12-09 15:04:39.647272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:01.544 [2024-12-09 15:04:39.647318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:01.544 [2024-12-09 15:04:39.647324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:01.544 [2024-12-09 15:04:39.647329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.544 [2024-12-09 15:04:39.647417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.306 ms, result 0 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:02.479 Remove shared memory files 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84631 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:02.479 00:32:02.479 real 1m23.973s 00:32:02.479 user 1m56.699s 00:32:02.479 sys 0m19.852s 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.479 15:04:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:02.479 ************************************ 00:32:02.479 END TEST ftl_upgrade_shutdown 00:32:02.479 ************************************ 00:32:02.479 15:04:40 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:02.479 15:04:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:02.479 15:04:40 ftl -- ftl/ftl.sh@14 -- # killprocess 76446 00:32:02.479 15:04:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 76446 ']' 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@958 -- # kill -0 76446 00:32:02.480 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76446) - No such process 00:32:02.480 Process with pid 76446 is not found 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76446 is not found' 00:32:02.480 15:04:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:02.480 15:04:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85081 00:32:02.480 15:04:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85081 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 85081 ']' 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.480 15:04:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:02.480 15:04:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:02.480 [2024-12-09 15:04:40.418987] Starting SPDK v25.01-pre git sha1 805149865 / DPDK 24.03.0 initialization... 
00:32:02.480 [2024-12-09 15:04:40.419104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85081 ] 00:32:02.480 [2024-12-09 15:04:40.568921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.738 [2024-12-09 15:04:40.644759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.305 15:04:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.305 15:04:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:03.305 15:04:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:03.563 nvme0n1 00:32:03.563 15:04:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:03.563 15:04:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:03.563 15:04:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:03.563 15:04:41 ftl -- ftl/common.sh@28 -- # stores=66ba270f-d813-4acc-b2e4-b3d05888c23f 00:32:03.563 15:04:41 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:03.563 15:04:41 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66ba270f-d813-4acc-b2e4-b3d05888c23f 00:32:03.821 15:04:41 ftl -- ftl/ftl.sh@23 -- # killprocess 85081 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 85081 ']' 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@958 -- # kill -0 85081 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@959 -- # uname 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85081 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:03.821 killing process with pid 85081 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85081' 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@973 -- # kill 85081 00:32:03.821 15:04:41 ftl -- common/autotest_common.sh@978 -- # wait 85081 00:32:05.197 15:04:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:05.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:05.197 Waiting for block devices as requested 00:32:05.458 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:05.458 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:05.458 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:05.458 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:10.746 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:10.746 15:04:48 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:10.746 Remove shared memory files 00:32:10.746 15:04:48 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:10.746 15:04:48 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:10.746 15:04:48 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:10.746 15:04:48 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:10.746 15:04:48 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:10.746 15:04:48 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:10.746 00:32:10.746 real 
12m50.204s 00:32:10.746 user 15m7.538s 00:32:10.746 sys 1m20.243s 00:32:10.746 15:04:48 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.746 15:04:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:10.746 ************************************ 00:32:10.746 END TEST ftl 00:32:10.746 ************************************ 00:32:10.746 15:04:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:10.746 15:04:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:10.746 15:04:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:10.746 15:04:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:10.746 15:04:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:10.746 15:04:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:10.746 15:04:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:10.746 15:04:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:10.746 15:04:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:10.746 15:04:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:10.746 15:04:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.746 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:32:10.746 15:04:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:10.746 15:04:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:10.746 15:04:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:10.746 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:32:12.135 INFO: APP EXITING 00:32:12.135 INFO: killing all VMs 00:32:12.135 INFO: killing vhost app 00:32:12.135 INFO: EXIT DONE 00:32:12.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:12.970 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:12.970 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:12.970 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:12.970 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:13.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:13.806 Cleaning 00:32:13.806 Removing: /var/run/dpdk/spdk0/config 00:32:13.806 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:13.806 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:13.806 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:13.806 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:13.806 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:13.806 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:13.806 Removing: /var/run/dpdk/spdk0 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58200 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58402 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58615 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58713 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58753 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58875 00:32:13.806 Removing: /var/run/dpdk/spdk_pid58893 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59091 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59185 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59281 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59392 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59484 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59523 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59560 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59630 00:32:13.806 Removing: /var/run/dpdk/spdk_pid59737 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60178 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60231 00:32:13.806 
Removing: /var/run/dpdk/spdk_pid60294 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60310 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60423 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60439 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60541 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60557 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60610 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60628 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60687 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60705 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60878 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60914 00:32:13.806 Removing: /var/run/dpdk/spdk_pid60998 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61175 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61259 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61296 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61744 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61841 00:32:13.806 Removing: /var/run/dpdk/spdk_pid61961 00:32:13.806 Removing: /var/run/dpdk/spdk_pid62014 00:32:13.806 Removing: /var/run/dpdk/spdk_pid62045 00:32:13.806 Removing: /var/run/dpdk/spdk_pid62118 00:32:13.806 Removing: /var/run/dpdk/spdk_pid62737 00:32:13.806 Removing: /var/run/dpdk/spdk_pid62773 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63243 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63341 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63461 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63509 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63540 00:32:13.806 Removing: /var/run/dpdk/spdk_pid63565 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65402 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65534 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65542 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65561 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65603 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65607 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65619 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65669 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65673 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65685 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65735 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65739 00:32:13.806 Removing: /var/run/dpdk/spdk_pid65751 00:32:13.806 Removing: /var/run/dpdk/spdk_pid67135 00:32:13.806 Removing: /var/run/dpdk/spdk_pid67235 00:32:13.806 Removing: /var/run/dpdk/spdk_pid68632 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70384 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70458 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70536 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70645 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70741 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70838 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70912 00:32:13.806 Removing: /var/run/dpdk/spdk_pid70993 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71098 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71196 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71292 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71366 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71447 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71551 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71644 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71739 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71812 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71889 00:32:13.806 Removing: /var/run/dpdk/spdk_pid71993 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72090 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72181 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72250 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72330 00:32:13.806 Removing: 
/var/run/dpdk/spdk_pid72400 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72474 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72577 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72668 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72763 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72837 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72911 00:32:13.806 Removing: /var/run/dpdk/spdk_pid72991 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73060 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73163 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73254 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73403 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73682 00:32:13.806 Removing: /var/run/dpdk/spdk_pid73718 00:32:13.806 Removing: /var/run/dpdk/spdk_pid74180 00:32:13.806 Removing: /var/run/dpdk/spdk_pid74384 00:32:14.068 Removing: /var/run/dpdk/spdk_pid74483 00:32:14.068 Removing: /var/run/dpdk/spdk_pid74601 00:32:14.068 Removing: /var/run/dpdk/spdk_pid74654 00:32:14.068 Removing: /var/run/dpdk/spdk_pid74674 00:32:14.068 Removing: /var/run/dpdk/spdk_pid74977 00:32:14.068 Removing: /var/run/dpdk/spdk_pid75037 00:32:14.068 Removing: /var/run/dpdk/spdk_pid75110 00:32:14.068 Removing: /var/run/dpdk/spdk_pid75494 00:32:14.068 Removing: /var/run/dpdk/spdk_pid75641 00:32:14.068 Removing: /var/run/dpdk/spdk_pid76446 00:32:14.068 Removing: /var/run/dpdk/spdk_pid76579 00:32:14.068 Removing: /var/run/dpdk/spdk_pid76754 00:32:14.068 Removing: /var/run/dpdk/spdk_pid76863 00:32:14.068 Removing: /var/run/dpdk/spdk_pid77177 00:32:14.068 Removing: /var/run/dpdk/spdk_pid77437 00:32:14.068 Removing: /var/run/dpdk/spdk_pid77789 00:32:14.068 Removing: /var/run/dpdk/spdk_pid77971 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78175 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78228 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78395 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78430 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78479 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78711 00:32:14.068 Removing: /var/run/dpdk/spdk_pid78956 00:32:14.068 Removing: /var/run/dpdk/spdk_pid79437 00:32:14.068 Removing: /var/run/dpdk/spdk_pid80100 00:32:14.068 Removing: /var/run/dpdk/spdk_pid80724 00:32:14.068 Removing: /var/run/dpdk/spdk_pid81549 00:32:14.068 Removing: /var/run/dpdk/spdk_pid81697 00:32:14.068 Removing: /var/run/dpdk/spdk_pid81780 00:32:14.068 Removing: /var/run/dpdk/spdk_pid82302 00:32:14.068 Removing: /var/run/dpdk/spdk_pid82356 00:32:14.068 Removing: /var/run/dpdk/spdk_pid82847 00:32:14.068 Removing: /var/run/dpdk/spdk_pid83267 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84070 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84198 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84245 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84309 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84367 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84427 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84631 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84720 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84787 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84854 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84883 00:32:14.068 Removing: /var/run/dpdk/spdk_pid84945 00:32:14.068 Removing: /var/run/dpdk/spdk_pid85081 00:32:14.068 Clean 00:32:14.068 15:04:52 -- common/autotest_common.sh@1453 -- # return 0 00:32:14.068 15:04:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:14.068 15:04:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.068 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:32:14.068 15:04:52 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:32:14.068 15:04:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.069 15:04:52 -- common/autotest_common.sh@10 -- # set +x 00:32:14.330 15:04:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:14.330 15:04:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:14.331 15:04:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:14.331 15:04:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:14.331 15:04:52 -- spdk/autotest.sh@398 -- # hostname 00:32:14.331 15:04:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:14.331 geninfo: WARNING: invalid characters removed from testname! 00:32:40.941 15:05:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:43.494 15:05:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:46.050 15:05:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:47.959 15:05:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:49.863 15:05:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:52.415 15:05:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:54.964 15:05:32 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:54.964 15:05:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:54.964 15:05:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:54.964 15:05:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:54.964 15:05:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:54.964 15:05:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:54.964 + [[ -n 5022 ]] 00:32:54.964 + sudo kill 5022 00:32:54.975 [Pipeline] } 00:32:54.998 [Pipeline] // timeout 00:32:55.004 [Pipeline] } 00:32:55.019 [Pipeline] // stage 00:32:55.025 [Pipeline] } 00:32:55.040 [Pipeline] // catchError 00:32:55.051 [Pipeline] stage 00:32:55.054 [Pipeline] { (Stop VM) 00:32:55.066 [Pipeline] sh 00:32:55.360 + vagrant halt 00:32:57.905 ==> default: Halting domain... 00:33:03.296 [Pipeline] sh 00:33:03.579 + vagrant destroy -f 00:33:06.128 ==> default: Removing domain... 00:33:07.087 [Pipeline] sh 00:33:07.377 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:07.387 [Pipeline] } 00:33:07.400 [Pipeline] // stage 00:33:07.404 [Pipeline] } 00:33:07.417 [Pipeline] // dir 00:33:07.422 [Pipeline] } 00:33:07.433 [Pipeline] // wrap 00:33:07.436 [Pipeline] } 00:33:07.450 [Pipeline] // catchError 00:33:07.466 [Pipeline] stage 00:33:07.468 [Pipeline] { (Epilogue) 00:33:07.478 [Pipeline] sh 00:33:07.763 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:13.067 [Pipeline] catchError 00:33:13.068 [Pipeline] { 00:33:13.079 [Pipeline] sh 00:33:13.361 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:13.361 Artifacts sizes are good 00:33:13.372 [Pipeline] } 00:33:13.386 [Pipeline] // catchError 00:33:13.396 [Pipeline] archiveArtifacts 00:33:13.402 Archiving artifacts 00:33:13.506 [Pipeline] cleanWs 00:33:13.518 [WS-CLEANUP] Deleting project workspace... 00:33:13.518 [WS-CLEANUP] Deferred wipeout is used... 00:33:13.524 [WS-CLEANUP] done 00:33:13.526 [Pipeline] } 00:33:13.541 [Pipeline] // stage 00:33:13.546 [Pipeline] } 00:33:13.559 [Pipeline] // node 00:33:13.564 [Pipeline] End of Pipeline 00:33:13.603 Finished: SUCCESS